00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 1002
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3669
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.052 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.052 The recommended git tool is: git
00:00:00.052 using credential 00000000-0000-0000-0000-000000000002
00:00:00.054 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.073 Fetching changes from the remote Git repository
00:00:00.076 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.117 Using shallow fetch with depth 1
00:00:00.117 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.117 > git --version # timeout=10
00:00:00.162 > git --version # 'git version 2.39.2'
00:00:00.162 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.202 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.202 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.366 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.377 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.389 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.389 > git config core.sparsecheckout # timeout=10
00:00:03.400 > git read-tree -mu HEAD # timeout=10
00:00:03.415 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.437 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.437 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.513 [Pipeline] Start of Pipeline
00:00:03.524 [Pipeline] library
00:00:03.526 Loading library shm_lib@master
00:00:03.526 Library shm_lib@master is cached. Copying from home.
00:00:03.541 [Pipeline] node
00:00:03.551 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest
00:00:03.553 [Pipeline] {
00:00:03.563 [Pipeline] catchError
00:00:03.565 [Pipeline] {
00:00:03.576 [Pipeline] wrap
00:00:03.584 [Pipeline] {
00:00:03.593 [Pipeline] stage
00:00:03.595 [Pipeline] { (Prologue)
00:00:03.612 [Pipeline] echo
00:00:03.613 Node: VM-host-WFP1
00:00:03.618 [Pipeline] cleanWs
00:00:03.626 [WS-CLEANUP] Deleting project workspace...
00:00:03.626 [WS-CLEANUP] Deferred wipeout is used...
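Note: the jbp checkout above is a plain shallow fetch pinned to a single revision; a minimal sketch of the equivalent steps outside Jenkins (URL and commit taken from the log, proxy and credential setup omitted):

  # Fetch only the tip of master, then pin the work tree to the logged revision.
  git init jbp && cd jbp
  git fetch --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  git checkout -f db4637e8b949f278f369ec13f70585206ccd9507  # works because FETCH_HEAD rev-parses to this commit, per the log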
00:00:03.634 [WS-CLEANUP] done
00:00:03.817 [Pipeline] setCustomBuildProperty
00:00:03.883 [Pipeline] httpRequest
00:00:04.288 [Pipeline] echo
00:00:04.290 Sorcerer 10.211.164.101 is alive
00:00:04.298 [Pipeline] retry
00:00:04.300 [Pipeline] {
00:00:04.311 [Pipeline] httpRequest
00:00:04.315 HttpMethod: GET
00:00:04.316 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.316 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.317 Response Code: HTTP/1.1 200 OK
00:00:04.318 Success: Status code 200 is in the accepted range: 200,404
00:00:04.318 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.595 [Pipeline] }
00:00:04.611 [Pipeline] // retry
00:00:04.618 [Pipeline] sh
00:00:04.901 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.915 [Pipeline] httpRequest
00:00:06.288 [Pipeline] echo
00:00:06.290 Sorcerer 10.211.164.101 is alive
00:00:06.298 [Pipeline] retry
00:00:06.300 [Pipeline] {
00:00:06.312 [Pipeline] httpRequest
00:00:06.317 HttpMethod: GET
00:00:06.317 URL: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:06.318 Sending request to url: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:06.328 Response Code: HTTP/1.1 200 OK
00:00:06.328 Success: Status code 200 is in the accepted range: 200,404
00:00:06.329 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:52.614 [Pipeline] }
00:00:52.633 [Pipeline] // retry
00:00:52.641 [Pipeline] sh
00:00:52.927 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:55.477 [Pipeline] sh
00:00:55.760 + git -C spdk log --oneline -n5
00:00:55.760 c13c99a5e test: Various fixes for Fedora40
00:00:55.760 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:00:55.760 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:00:55.760 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:00:55.760 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:00:55.793 [Pipeline] withCredentials
00:00:55.804 > git --version # timeout=10
00:00:55.819 > git --version # 'git version 2.39.2'
00:00:55.837 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:55.839 [Pipeline] {
00:00:55.850 [Pipeline] retry
00:00:55.852 [Pipeline] {
00:00:55.866 [Pipeline] sh
00:00:56.148 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:00:56.159 [Pipeline] }
00:00:56.176 [Pipeline] // retry
00:00:56.182 [Pipeline] }
00:00:56.200 [Pipeline] // withCredentials
00:00:56.210 [Pipeline] httpRequest
00:00:56.631 [Pipeline] echo
00:00:56.633 Sorcerer 10.211.164.101 is alive
00:00:56.643 [Pipeline] retry
00:00:56.645 [Pipeline] {
00:00:56.660 [Pipeline] httpRequest
00:00:56.664 HttpMethod: GET
00:00:56.665 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:56.666 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:56.671 Response Code: HTTP/1.1 200 OK
00:00:56.672 Success: Status code 200 is in the accepted range: 200,404
00:00:56.672 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:15.408 [Pipeline] }
00:01:15.428 [Pipeline] // retry
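Note: each source tree above arrives as a revision-named tarball from the package cache; a minimal sketch of the same fetch-and-unpack step done by hand (curl assumed available; URL and tar flags copied from the log):

  # Download the pinned SPDK tarball from the cache and unpack it in place.
  curl -f -o spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz \
    http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
  tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz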
00:01:15.439 [Pipeline] sh
00:01:15.721 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:17.112 [Pipeline] sh
00:01:17.395 + git -C dpdk log --oneline -n5
00:01:17.395 caf0f5d395 version: 22.11.4
00:01:17.395 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:17.395 dc9c799c7d vhost: fix missing spinlock unlock
00:01:17.395 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:17.395 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:17.414 [Pipeline] writeFile
00:01:17.431 [Pipeline] sh
00:01:17.851 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:17.864 [Pipeline] sh
00:01:18.147 + cat autorun-spdk.conf
00:01:18.147 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:18.147 SPDK_TEST_NVME=1
00:01:18.147 SPDK_TEST_FTL=1
00:01:18.147 SPDK_TEST_ISAL=1
00:01:18.147 SPDK_RUN_ASAN=1
00:01:18.147 SPDK_RUN_UBSAN=1
00:01:18.147 SPDK_TEST_XNVME=1
00:01:18.147 SPDK_TEST_NVME_FDP=1
00:01:18.147 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:18.147 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:18.147 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:18.152 RUN_NIGHTLY=1
00:01:18.154 [Pipeline] }
00:01:18.169 [Pipeline] // stage
00:01:18.183 [Pipeline] stage
00:01:18.185 [Pipeline] { (Run VM)
00:01:18.198 [Pipeline] sh
00:01:18.478 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:18.478 + echo 'Start stage prepare_nvme.sh'
00:01:18.478 Start stage prepare_nvme.sh
00:01:18.478 + [[ -n 6 ]]
00:01:18.478 + disk_prefix=ex6
00:01:18.478 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:18.478 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:18.478 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:18.478 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:18.478 ++ SPDK_TEST_NVME=1
00:01:18.478 ++ SPDK_TEST_FTL=1
00:01:18.478 ++ SPDK_TEST_ISAL=1
00:01:18.478 ++ SPDK_RUN_ASAN=1
00:01:18.478 ++ SPDK_RUN_UBSAN=1
00:01:18.478 ++ SPDK_TEST_XNVME=1
00:01:18.478 ++ SPDK_TEST_NVME_FDP=1
00:01:18.478 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:18.478 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:18.478 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:18.478 ++ RUN_NIGHTLY=1
00:01:18.478 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:18.478 + nvme_files=()
00:01:18.478 + declare -A nvme_files
00:01:18.478 + backend_dir=/var/lib/libvirt/images/backends
00:01:18.478 + nvme_files['nvme.img']=5G
00:01:18.478 + nvme_files['nvme-cmb.img']=5G
00:01:18.478 + nvme_files['nvme-multi0.img']=4G
00:01:18.478 + nvme_files['nvme-multi1.img']=4G
00:01:18.478 + nvme_files['nvme-multi2.img']=4G
00:01:18.478 + nvme_files['nvme-openstack.img']=8G
00:01:18.478 + nvme_files['nvme-zns.img']=5G
00:01:18.478 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:18.478 + (( SPDK_TEST_FTL == 1 ))
00:01:18.478 + nvme_files["nvme-ftl.img"]=6G
00:01:18.478 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:18.478 + nvme_files["nvme-fdp.img"]=1G
00:01:18.478 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:18.478 + for nvme in "${!nvme_files[@]}"
00:01:18.478 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G
00:01:18.478 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:18.478 + for nvme in "${!nvme_files[@]}"
00:01:18.478 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G
00:01:19.065 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:19.324 + for nvme in "${!nvme_files[@]}"
00:01:19.324 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G
00:01:19.324 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:19.324 + for nvme in "${!nvme_files[@]}"
00:01:19.324 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G
00:01:19.324 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:19.324 + for nvme in "${!nvme_files[@]}"
00:01:19.324 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G
00:01:19.324 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:19.324 + for nvme in "${!nvme_files[@]}"
00:01:19.324 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G
00:01:19.582 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:19.582 + for nvme in "${!nvme_files[@]}"
00:01:19.582 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G
00:01:19.582 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:19.582 + for nvme in "${!nvme_files[@]}"
00:01:19.582 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G
00:01:19.840 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:19.840 + for nvme in "${!nvme_files[@]}"
00:01:19.840 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G
00:01:20.407 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:20.407 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu
00:01:20.407 + echo 'End stage prepare_nvme.sh'
00:01:20.407 End stage prepare_nvme.sh
00:01:20.420 [Pipeline] sh
00:01:20.700 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:20.700 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:20.700
00:01:20.700 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:20.700 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:20.700 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:20.700 HELP=0
00:01:20.700 DRY_RUN=0
00:01:20.700 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,
00:01:20.700 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:20.700 NVME_AUTO_CREATE=0
00:01:20.700 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,,
00:01:20.700 NVME_CMB=,,,,
00:01:20.700 NVME_PMR=,,,,
00:01:20.700 NVME_ZNS=,,,,
00:01:20.700 NVME_MS=true,,,,
00:01:20.700 NVME_FDP=,,,on,
00:01:20.700 SPDK_VAGRANT_DISTRO=fedora39
00:01:20.700 SPDK_VAGRANT_VMCPU=10
00:01:20.700 SPDK_VAGRANT_VMRAM=12288
00:01:20.700 SPDK_VAGRANT_PROVIDER=libvirt
00:01:20.700 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:20.700 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:20.700 SPDK_OPENSTACK_NETWORK=0
00:01:20.700 VAGRANT_PACKAGE_BOX=0
00:01:20.700 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:20.700 FORCE_DISTRO=true
00:01:20.700 VAGRANT_BOX_VERSION=
00:01:20.700 EXTRA_VAGRANTFILES=
00:01:20.700 NIC_MODEL=e1000
00:01:20.700
00:01:20.700 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:20.700 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:23.236 Bringing machine 'default' up with 'libvirt' provider...
00:01:24.173 ==> default: Creating image (snapshot of base box volume).
00:01:24.433 ==> default: Creating domain with the following settings...
00:01:24.433 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732643260_b5d90a9d0a58879bce0e
00:01:24.433 ==> default: -- Domain type: kvm
00:01:24.433 ==> default: -- Cpus: 10
00:01:24.433 ==> default: -- Feature: acpi
00:01:24.433 ==> default: -- Feature: apic
00:01:24.433 ==> default: -- Feature: pae
00:01:24.433 ==> default: -- Memory: 12288M
00:01:24.433 ==> default: -- Memory Backing: hugepages:
00:01:24.433 ==> default: -- Management MAC:
00:01:24.433 ==> default: -- Loader:
00:01:24.433 ==> default: -- Nvram:
00:01:24.433 ==> default: -- Base box: spdk/fedora39
00:01:24.433 ==> default: -- Storage pool: default
00:01:24.433 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732643260_b5d90a9d0a58879bce0e.img (20G)
00:01:24.433 ==> default: -- Volume Cache: default
00:01:24.433 ==> default: -- Kernel:
00:01:24.433 ==> default: -- Initrd:
00:01:24.433 ==> default: -- Graphics Type: vnc
00:01:24.433 ==> default: -- Graphics Port: -1
00:01:24.433 ==> default: -- Graphics IP: 127.0.0.1
00:01:24.433 ==> default: -- Graphics Password: Not defined
00:01:24.433 ==> default: -- Video Type: cirrus
00:01:24.433 ==> default: -- Video VRAM: 9216
00:01:24.433 ==> default: -- Sound Type:
00:01:24.433 ==> default: -- Keymap: en-us
00:01:24.433 ==> default: -- TPM Path:
00:01:24.433 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:24.433 ==> default: -- Command line args:
00:01:24.433 ==> default: -> value=-device,
00:01:24.433 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:01:24.433 ==> default: -> value=-drive,
00:01:24.433 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:24.433 ==> default: -> value=-device,
00:01:24.433 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:24.433 ==> default: -> value=-device,
00:01:24.433 ==> default: -> value=nvme,id=nvme-1,serial=12341,
00:01:24.433 ==> default: -> value=-drive,
00:01:24.433 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0,
00:01:24.433 ==> default: -> value=-device,
00:01:24.433 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:24.433 ==> default: -> value=-device,
00:01:24.433 ==> default: -> value=nvme,id=nvme-2,serial=12342,
00:01:24.433 ==> default: -> value=-drive,
00:01:24.433 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:24.433 ==> default: -> value=-device,
00:01:24.433 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:24.433 ==> default: -> value=-drive,
00:01:24.433 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:24.433 ==> default: -> value=-device,
00:01:24.433 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:24.433 ==> default: -> value=-drive,
00:01:24.433 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:24.433 ==> default: -> value=-device,
00:01:24.433 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:24.433 ==> default: -> value=-device,
00:01:24.433 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:24.433 ==> default: -> value=-device,
00:01:24.433 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3,
00:01:24.433 ==> default: -> value=-drive,
00:01:24.433 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:24.433 ==> default: -> value=-device,
00:01:24.433 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:25.001 ==> default: Creating shared folders metadata...
00:01:25.001 ==> default: Starting domain.
00:01:26.905 ==> default: Waiting for domain to get an IP address...
00:01:45.011 ==> default: Waiting for SSH to become available...
00:01:45.011 ==> default: Configuring and enabling network interfaces...
00:01:49.203 default: SSH address: 192.168.121.85:22
00:01:49.203 default: SSH username: vagrant
00:01:49.203 default: SSH auth method: private key
00:01:52.497 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:00.622 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:07.189 ==> default: Mounting SSHFS shared folder...
00:02:08.566 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:08.566 ==> default: Checking Mount..
00:02:09.948 ==> default: Folder Successfully Mounted!
00:02:09.948 ==> default: Running provisioner: file...
00:02:11.324 default: ~/.gitconfig => .gitconfig
00:02:11.582
00:02:11.582 SUCCESS!
00:02:11.582
00:02:11.582 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:11.582 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:11.582 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:11.582
00:02:11.591 [Pipeline] }
00:02:11.604 [Pipeline] // stage
00:02:11.611 [Pipeline] dir
00:02:11.611 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:11.612 [Pipeline] {
00:02:11.622 [Pipeline] catchError
00:02:11.625 [Pipeline] {
00:02:11.634 [Pipeline] sh
00:02:11.911 + vagrant ssh-config --host vagrant
00:02:11.911 + sed -ne /^Host/,$p
00:02:11.911 + tee ssh_conf
00:02:15.197 Host vagrant
00:02:15.197 HostName 192.168.121.85
00:02:15.197 User vagrant
00:02:15.197 Port 22
00:02:15.197 UserKnownHostsFile /dev/null
00:02:15.197 StrictHostKeyChecking no
00:02:15.197 PasswordAuthentication no
00:02:15.197 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:15.197 IdentitiesOnly yes
00:02:15.197 LogLevel FATAL
00:02:15.197 ForwardAgent yes
00:02:15.197 ForwardX11 yes
00:02:15.197
00:02:15.210 [Pipeline] withEnv
00:02:15.212 [Pipeline] {
00:02:15.224 [Pipeline] sh
00:02:15.506 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:15.506 source /etc/os-release
00:02:15.506 [[ -e /image.version ]] && img=$(< /image.version)
00:02:15.506 # Minimal, systemd-like check.
00:02:15.506 if [[ -e /.dockerenv ]]; then
00:02:15.506 # Clear garbage from the node's name:
00:02:15.506 # agt-er_autotest_547-896 -> autotest_547-896
00:02:15.506 # $HOSTNAME is the actual container id
00:02:15.506 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:15.506 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:15.506 # We can assume this is a mount from a host where container is running,
00:02:15.506 # so fetch its hostname to easily identify the target swarm worker.
00:02:15.506 container="$(< /etc/hostname) ($agent)"
00:02:15.506 else
00:02:15.506 # Fallback
00:02:15.506 container=$agent
00:02:15.506 fi
00:02:15.506 fi
00:02:15.506 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:15.506
00:02:15.776 [Pipeline] }
00:02:15.793 [Pipeline] // withEnv
00:02:15.802 [Pipeline] setCustomBuildProperty
00:02:15.818 [Pipeline] stage
00:02:15.821 [Pipeline] { (Tests)
00:02:15.839 [Pipeline] sh
00:02:16.120 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:16.397 [Pipeline] sh
00:02:16.680 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:16.955 [Pipeline] timeout
00:02:16.955 Timeout set to expire in 50 min
00:02:16.957 [Pipeline] {
00:02:16.971 [Pipeline] sh
00:02:17.253 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:17.820 HEAD is now at c13c99a5e test: Various fixes for Fedora40
00:02:17.832 [Pipeline] sh
00:02:18.115 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:18.387 [Pipeline] sh
00:02:18.669 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:18.943 [Pipeline] sh
00:02:19.223 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:02:19.482 ++ readlink -f spdk_repo
00:02:19.482 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:19.482 + [[ -n /home/vagrant/spdk_repo ]]
00:02:19.482 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:19.482 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:19.482 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:19.482 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:19.482 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:19.482 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:19.482 + cd /home/vagrant/spdk_repo
00:02:19.482 + source /etc/os-release
00:02:19.482 ++ NAME='Fedora Linux'
00:02:19.482 ++ VERSION='39 (Cloud Edition)'
00:02:19.482 ++ ID=fedora
00:02:19.482 ++ VERSION_ID=39
00:02:19.482 ++ VERSION_CODENAME=
00:02:19.482 ++ PLATFORM_ID=platform:f39
00:02:19.482 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:19.482 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:19.482 ++ LOGO=fedora-logo-icon
00:02:19.482 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:19.482 ++ HOME_URL=https://fedoraproject.org/
00:02:19.482 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:19.482 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:19.482 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:19.482 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:19.482 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:19.482 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:19.482 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:19.482 ++ SUPPORT_END=2024-11-12
00:02:19.482 ++ VARIANT='Cloud Edition'
00:02:19.482 ++ VARIANT_ID=cloud
00:02:19.482 + uname -a
00:02:19.482 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:19.482 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:19.741 Hugepages
00:02:19.741 node hugesize free / total
00:02:19.741 node0 1048576kB 0 / 0
00:02:19.741 node0 2048kB 0 / 0
00:02:19.741
00:02:19.741 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:19.741 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:19.741 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:19.741 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:19.741 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:19.741 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:19.741 + rm -f /tmp/spdk-ld-path
00:02:20.000 + source autorun-spdk.conf
00:02:20.000 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:20.000 ++ SPDK_TEST_NVME=1
00:02:20.000 ++ SPDK_TEST_FTL=1
00:02:20.000 ++ SPDK_TEST_ISAL=1
00:02:20.000 ++ SPDK_RUN_ASAN=1
00:02:20.000 ++ SPDK_RUN_UBSAN=1
00:02:20.000 ++ SPDK_TEST_XNVME=1
00:02:20.000 ++ SPDK_TEST_NVME_FDP=1
00:02:20.000 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:20.000 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:20.000 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:20.000 ++ RUN_NIGHTLY=1
00:02:20.000 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:20.000 + [[ -n '' ]]
00:02:20.000 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:20.000 + for M in /var/spdk/build-*-manifest.txt
00:02:20.000 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:20.000 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:20.000 + for M in /var/spdk/build-*-manifest.txt
00:02:20.000 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:20.000 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:20.000 + for M in /var/spdk/build-*-manifest.txt
00:02:20.000 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:20.000 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:20.000 ++ uname
00:02:20.000 + [[ Linux == \L\i\n\u\x ]]
00:02:20.000 + sudo dmesg -T
00:02:20.000 + sudo dmesg --clear
00:02:20.000 + dmesg_pid=5941
00:02:20.000 + [[ Fedora Linux == FreeBSD ]]
00:02:20.000 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:20.000 + sudo dmesg -Tw
00:02:20.000 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:20.000 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:20.000 + [[ -x /usr/src/fio-static/fio ]]
00:02:20.000 + export FIO_BIN=/usr/src/fio-static/fio
00:02:20.000 + FIO_BIN=/usr/src/fio-static/fio
00:02:20.000 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:20.000 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:20.000 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:20.000 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:20.000 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:20.000 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:20.000 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:20.000 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:20.000 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:20.000 Test configuration:
00:02:20.000 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:20.000 SPDK_TEST_NVME=1
00:02:20.000 SPDK_TEST_FTL=1
00:02:20.000 SPDK_TEST_ISAL=1
00:02:20.000 SPDK_RUN_ASAN=1
00:02:20.000 SPDK_RUN_UBSAN=1
00:02:20.000 SPDK_TEST_XNVME=1
00:02:20.000 SPDK_TEST_NVME_FDP=1
00:02:20.000 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:20.000 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:20.000 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:20.260 RUN_NIGHTLY=1
17:48:36 -- common/autotest_common.sh@1689 -- $ [[ n == y ]]
00:02:20.260 17:48:36 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:20.260 17:48:36 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:20.260 17:48:36 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:20.260 17:48:36 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:20.260 17:48:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:20.260 17:48:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:20.260 17:48:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:20.260 17:48:36 -- paths/export.sh@5 -- $ export PATH
00:02:20.260 17:48:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:20.260 17:48:36 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:20.260 17:48:36 -- common/autobuild_common.sh@440 -- $ date +%s
00:02:20.260 17:48:36 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732643316.XXXXXX
00:02:20.260 17:48:36 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732643316.AUs6cF
00:02:20.260 17:48:36 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:02:20.260 17:48:36 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']'
00:02:20.260 17:48:36 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:20.260 17:48:36 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:02:20.260 17:48:36 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:20.260 17:48:36 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:20.260 17:48:36 -- common/autobuild_common.sh@456 -- $ get_config_params
00:02:20.260 17:48:36 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:02:20.260 17:48:36 -- common/autotest_common.sh@10 -- $ set +x
00:02:20.260 17:48:37 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:02:20.260 17:48:37 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:20.260 17:48:37 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:20.260 17:48:37 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:20.260 17:48:37 -- spdk/autobuild.sh@16 -- $ date -u
00:02:20.260 Tue Nov 26 05:48:37 PM UTC 2024
00:02:20.260 17:48:37 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:20.260 LTS-67-gc13c99a5e
00:02:20.260 17:48:37 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:20.260 17:48:37 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:20.260 17:48:37 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:20.260 17:48:37 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:20.260 17:48:37 -- common/autotest_common.sh@10 -- $ set +x
00:02:20.260 ************************************
00:02:20.260 START TEST asan
00:02:20.260 ************************************
00:02:20.260 using asan
00:02:20.260 17:48:37 -- common/autotest_common.sh@1114 -- $ echo 'using asan'
00:02:20.260
00:02:20.260 real 0m0.001s
00:02:20.260 user 0m0.000s
00:02:20.260 sys 0m0.000s
00:02:20.260 17:48:37 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:02:20.260 17:48:37 -- common/autotest_common.sh@10 -- $ set +x
00:02:20.260 ************************************
00:02:20.260 END TEST asan
00:02:20.260 ************************************
00:02:20.260 17:48:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:20.260 17:48:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:20.260 17:48:37 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:20.260 17:48:37 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:20.260 17:48:37 -- common/autotest_common.sh@10 -- $ set +x
00:02:20.260 ************************************
00:02:20.260 START TEST ubsan
00:02:20.260 ************************************
00:02:20.260 using ubsan
00:02:20.260 17:48:37 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan'
00:02:20.260
00:02:20.260 real 0m0.000s
00:02:20.260 user 0m0.000s
00:02:20.260 sys 0m0.000s
00:02:20.260 17:48:37 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:02:20.260 17:48:37 -- common/autotest_common.sh@10 -- $ set +x
00:02:20.260 ************************************
00:02:20.260 END TEST ubsan
00:02:20.260 ************************************
00:02:20.520 17:48:37 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:02:20.520 17:48:37 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:20.520 17:48:37 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:20.520 17:48:37 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']'
00:02:20.520 17:48:37 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:20.520 17:48:37 -- common/autotest_common.sh@10 -- $ set +x
00:02:20.520 ************************************
00:02:20.520 START TEST build_native_dpdk
00:02:20.520 ************************************
00:02:20.520 17:48:37 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk
00:02:20.520 17:48:37 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:02:20.520 17:48:37 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:02:20.520 17:48:37 -- common/autobuild_common.sh@50 -- $ local compiler_version
00:02:20.520 17:48:37 -- common/autobuild_common.sh@51 -- $ local compiler
00:02:20.520 17:48:37 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:02:20.520 17:48:37 -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:02:20.520 17:48:37 -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:02:20.520 17:48:37 -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:02:20.520 17:48:37 -- common/autobuild_common.sh@61 -- $ CC=gcc
00:02:20.520 17:48:37 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:02:20.520 17:48:37 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:02:20.520 17:48:37 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:02:20.520 17:48:37 -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:02:20.520 17:48:37 -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:02:20.520 17:48:37 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:02:20.520 17:48:37 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:20.520 17:48:37 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:02:20.520 17:48:37 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:02:20.520 17:48:37 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:02:20.520 17:48:37 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:02:20.520 caf0f5d395 version: 22.11.4
00:02:20.520 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:02:20.520 dc9c799c7d vhost: fix missing spinlock unlock
00:02:20.520 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:02:20.520 6ef77f2a5e net/gve: fix RX buffer size alignment
00:02:20.520 17:48:37 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:02:20.520 17:48:37 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:02:20.520 17:48:37 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:02:20.520 17:48:37 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:02:20.520 17:48:37 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:02:20.520 17:48:37 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:02:20.520 17:48:37 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:02:20.520 17:48:37 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:02:20.520 17:48:37 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:02:20.520 17:48:37 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:02:20.520 17:48:37 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:02:20.520 17:48:37 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:02:20.520 17:48:37 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:02:20.520 17:48:37 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:02:20.520 17:48:37 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk
00:02:20.520 17:48:37 -- common/autobuild_common.sh@168 -- $ uname -s
00:02:20.520 17:48:37 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:02:20.520 17:48:37 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0
00:02:20.520 17:48:37 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0
00:02:20.520 17:48:37 -- scripts/common.sh@332 -- $ local ver1 ver1_l
00:02:20.520 17:48:37 -- scripts/common.sh@333 -- $ local ver2 ver2_l
00:02:20.520 17:48:37 -- scripts/common.sh@335 -- $ IFS=.-:
00:02:20.520 17:48:37 -- scripts/common.sh@335 -- $ read -ra ver1
00:02:20.520 17:48:37 -- scripts/common.sh@336 -- $ IFS=.-:
00:02:20.520 17:48:37 -- scripts/common.sh@336 -- $ read -ra ver2
00:02:20.520 17:48:37 -- scripts/common.sh@337 -- $ local 'op=<'
00:02:20.520 17:48:37 -- scripts/common.sh@339 -- $ ver1_l=3
00:02:20.520 17:48:37 -- scripts/common.sh@340 -- $ ver2_l=3
00:02:20.520 17:48:37 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:02:20.520 17:48:37 -- scripts/common.sh@343 -- $ case "$op" in
00:02:20.520 17:48:37 -- scripts/common.sh@344 -- $ : 1
00:02:20.520 17:48:37 -- scripts/common.sh@363 -- $ (( v = 0 ))
00:02:20.520 17:48:37 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:20.520 17:48:37 -- scripts/common.sh@364 -- $ decimal 22
00:02:20.520 17:48:37 -- scripts/common.sh@352 -- $ local d=22
00:02:20.520 17:48:37 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:02:20.520 17:48:37 -- scripts/common.sh@354 -- $ echo 22
00:02:20.520 17:48:37 -- scripts/common.sh@364 -- $ ver1[v]=22
00:02:20.520 17:48:37 -- scripts/common.sh@365 -- $ decimal 21
00:02:20.520 17:48:37 -- scripts/common.sh@352 -- $ local d=21
00:02:20.520 17:48:37 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:02:20.520 17:48:37 -- scripts/common.sh@354 -- $ echo 21
00:02:20.520 17:48:37 -- scripts/common.sh@365 -- $ ver2[v]=21
00:02:20.520 17:48:37 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:02:20.520 17:48:37 -- scripts/common.sh@366 -- $ return 1
00:02:20.520 17:48:37 -- common/autobuild_common.sh@173 -- $ patch -p1
00:02:20.520 patching file config/rte_config.h
00:02:20.520 Hunk #1 succeeded at 60 (offset 1 line).
00:02:20.521 17:48:37 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0
00:02:20.521 17:48:37 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0
00:02:20.521 17:48:37 -- scripts/common.sh@332 -- $ local ver1 ver1_l
00:02:20.521 17:48:37 -- scripts/common.sh@333 -- $ local ver2 ver2_l
00:02:20.521 17:48:37 -- scripts/common.sh@335 -- $ IFS=.-:
00:02:20.521 17:48:37 -- scripts/common.sh@335 -- $ read -ra ver1
00:02:20.521 17:48:37 -- scripts/common.sh@336 -- $ IFS=.-:
00:02:20.521 17:48:37 -- scripts/common.sh@336 -- $ read -ra ver2
00:02:20.521 17:48:37 -- scripts/common.sh@337 -- $ local 'op=<'
00:02:20.521 17:48:37 -- scripts/common.sh@339 -- $ ver1_l=3
00:02:20.521 17:48:37 -- scripts/common.sh@340 -- $ ver2_l=3
00:02:20.521 17:48:37 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:02:20.521 17:48:37 -- scripts/common.sh@343 -- $ case "$op" in
00:02:20.521 17:48:37 -- scripts/common.sh@344 -- $ : 1
00:02:20.521 17:48:37 -- scripts/common.sh@363 -- $ (( v = 0 ))
00:02:20.521 17:48:37 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:20.521 17:48:37 -- scripts/common.sh@364 -- $ decimal 22
00:02:20.521 17:48:37 -- scripts/common.sh@352 -- $ local d=22
00:02:20.521 17:48:37 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:02:20.521 17:48:37 -- scripts/common.sh@354 -- $ echo 22
00:02:20.521 17:48:37 -- scripts/common.sh@364 -- $ ver1[v]=22
00:02:20.521 17:48:37 -- scripts/common.sh@365 -- $ decimal 24
00:02:20.521 17:48:37 -- scripts/common.sh@352 -- $ local d=24
00:02:20.521 17:48:37 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:20.521 17:48:37 -- scripts/common.sh@354 -- $ echo 24
00:02:20.521 17:48:37 -- scripts/common.sh@365 -- $ ver2[v]=24
00:02:20.521 17:48:37 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:02:20.521 17:48:37 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
00:02:20.521 17:48:37 -- scripts/common.sh@367 -- $ return 0
00:02:20.521 17:48:37 -- common/autobuild_common.sh@177 -- $ patch -p1
00:02:20.521 patching file lib/pcapng/rte_pcapng.c
00:02:20.521 Hunk #1 succeeded at 110 (offset -18 lines).
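Note: the lt/cmp_versions trace above walks a field-by-field dotted-version compare; a minimal standalone sketch of the same idea (not the actual scripts/common.sh code, names here are illustrative):

  # Return 0 if $1 is strictly older than $2, comparing dot-separated fields numerically.
  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1  # equal is not "less than"
  }
  version_lt 22.11.4 21.11.0 || echo "not older"  # matches the 'return 1' traced above
  version_lt 22.11.4 24.07.0 && echo "older"      # matches the 'return 0' traced above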
00:02:20.521 17:48:37 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false
00:02:20.521 17:48:37 -- common/autobuild_common.sh@181 -- $ uname -s
00:02:20.521 17:48:37 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']'
00:02:20.521 17:48:37 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:02:20.521 17:48:37 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:27.149 The Meson build system
00:02:27.149 Version: 1.5.0
00:02:27.149 Source dir: /home/vagrant/spdk_repo/dpdk
00:02:27.149 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:02:27.149 Build type: native build
00:02:27.149 Program cat found: YES (/usr/bin/cat)
00:02:27.149 Project name: DPDK
00:02:27.149 Project version: 22.11.4
00:02:27.149 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:27.149 C linker for the host machine: gcc ld.bfd 2.40-14
00:02:27.149 Host machine cpu family: x86_64
00:02:27.149 Host machine cpu: x86_64
00:02:27.149 Message: ## Building in Developer Mode ##
00:02:27.149 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:27.149 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:02:27.149 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:02:27.149 Program objdump found: YES (/usr/bin/objdump)
00:02:27.149 Program python3 found: YES (/usr/bin/python3)
00:02:27.149 Program cat found: YES (/usr/bin/cat)
00:02:27.149 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
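Note: per the warning above (from DPDK's config/meson.build), newer DPDK releases replace -Dmachine with -Dcpu_instruction_set; a hedged sketch of the same configure line with the substitution applied, all other flags unchanged from the invocation above:

  meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib \
    -Dcpu_instruction_set=native ...  # instead of the deprecated -Dmachine=native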
00:02:27.149 Checking for size of "void *" : 8
00:02:27.149 Checking for size of "void *" : 8 (cached)
00:02:27.149 Library m found: YES
00:02:27.149 Library numa found: YES
00:02:27.149 Has header "numaif.h" : YES
00:02:27.149 Library fdt found: NO
00:02:27.149 Library execinfo found: NO
00:02:27.149 Has header "execinfo.h" : YES
00:02:27.149 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:27.149 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:27.149 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:27.149 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:27.149 Run-time dependency openssl found: YES 3.1.1
00:02:27.149 Run-time dependency libpcap found: YES 1.10.4
00:02:27.149 Has header "pcap.h" with dependency libpcap: YES
00:02:27.149 Compiler for C supports arguments -Wcast-qual: YES
00:02:27.149 Compiler for C supports arguments -Wdeprecated: YES
00:02:27.149 Compiler for C supports arguments -Wformat: YES
00:02:27.149 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:27.149 Compiler for C supports arguments -Wformat-security: NO
00:02:27.149 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:27.149 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:27.149 Compiler for C supports arguments -Wnested-externs: YES
00:02:27.149 Compiler for C supports arguments -Wold-style-definition: YES
00:02:27.149 Compiler for C supports arguments -Wpointer-arith: YES
00:02:27.149 Compiler for C supports arguments -Wsign-compare: YES
00:02:27.149 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:27.149 Compiler for C supports arguments -Wundef: YES
00:02:27.149 Compiler for C supports arguments -Wwrite-strings: YES
00:02:27.149 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:27.149 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:27.149 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:27.149 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:27.149 Compiler for C supports arguments -mavx512f: YES
00:02:27.149 Checking if "AVX512 checking" compiles: YES
00:02:27.149 Fetching value of define "__SSE4_2__" : 1
00:02:27.149 Fetching value of define "__AES__" : 1
00:02:27.149 Fetching value of define "__AVX__" : 1
00:02:27.149 Fetching value of define "__AVX2__" : 1
00:02:27.149 Fetching value of define "__AVX512BW__" : 1
00:02:27.149 Fetching value of define "__AVX512CD__" : 1
00:02:27.149 Fetching value of define "__AVX512DQ__" : 1
00:02:27.149 Fetching value of define "__AVX512F__" : 1
00:02:27.149 Fetching value of define "__AVX512VL__" : 1
00:02:27.149 Fetching value of define "__PCLMUL__" : 1
00:02:27.149 Fetching value of define "__RDRND__" : 1
00:02:27.149 Fetching value of define "__RDSEED__" : 1
00:02:27.149 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:27.149 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:27.149 Message: lib/kvargs: Defining dependency "kvargs"
00:02:27.149 Message: lib/telemetry: Defining dependency "telemetry"
00:02:27.149 Checking for function "getentropy" : YES
00:02:27.149 Message: lib/eal: Defining dependency "eal"
00:02:27.149 Message: lib/ring: Defining dependency "ring"
00:02:27.149 Message: lib/rcu: Defining dependency "rcu"
00:02:27.149 Message: lib/mempool: Defining dependency "mempool"
00:02:27.149 Message: lib/mbuf: Defining dependency "mbuf"
00:02:27.149 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:27.149 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:27.149 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:27.149 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:27.149 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:27.149 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:27.149 Compiler for C supports arguments -mpclmul: YES
00:02:27.149 Compiler for C supports arguments -maes: YES
00:02:27.149 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:27.149 Compiler for C supports arguments -mavx512bw: YES
00:02:27.149 Compiler for C supports arguments -mavx512dq: YES
00:02:27.149 Compiler for C supports arguments -mavx512vl: YES
00:02:27.149 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:27.149 Compiler for C supports arguments -mavx2: YES
00:02:27.149 Compiler for C supports arguments -mavx: YES
00:02:27.149 Message: lib/net: Defining dependency "net"
00:02:27.149 Message: lib/meter: Defining dependency "meter"
00:02:27.149 Message: lib/ethdev: Defining dependency "ethdev"
00:02:27.149 Message: lib/pci: Defining dependency "pci"
00:02:27.149 Message: lib/cmdline: Defining dependency "cmdline"
00:02:27.149 Message: lib/metrics: Defining dependency "metrics"
00:02:27.149 Message: lib/hash: Defining dependency "hash"
00:02:27.149 Message: lib/timer: Defining dependency "timer"
00:02:27.149 Fetching value of define "__AVX2__" : 1 (cached)
00:02:27.149 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:27.149 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:27.149 Fetching value of define "__AVX512CD__" : 1 (cached)
00:02:27.149 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:27.149 Message: lib/acl: Defining dependency "acl"
00:02:27.149 Message: lib/bbdev: Defining dependency "bbdev"
00:02:27.149 Message: lib/bitratestats: Defining dependency "bitratestats"
00:02:27.149 Run-time dependency libelf found: YES 0.191
00:02:27.149 Message: lib/bpf: Defining dependency "bpf"
00:02:27.149 Message: lib/cfgfile: Defining dependency "cfgfile"
00:02:27.149 Message: lib/compressdev: Defining dependency "compressdev"
00:02:27.149 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:27.149 Message: lib/distributor: Defining dependency "distributor"
00:02:27.149 Message: lib/efd: Defining dependency "efd"
00:02:27.149 Message: lib/eventdev: Defining dependency "eventdev"
00:02:27.149 Message: lib/gpudev: Defining dependency "gpudev"
00:02:27.149 Message: lib/gro: Defining dependency "gro"
00:02:27.149 Message: lib/gso: Defining dependency "gso"
00:02:27.149 Message: lib/ip_frag: Defining dependency "ip_frag"
00:02:27.149 Message: lib/jobstats: Defining dependency "jobstats"
00:02:27.149 Message: lib/latencystats: Defining dependency "latencystats"
00:02:27.149 Message: lib/lpm: Defining dependency "lpm"
00:02:27.149 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:27.149 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:27.150 Fetching value of define "__AVX512IFMA__" : (undefined)
00:02:27.150 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES
00:02:27.150 Message: lib/member: Defining dependency "member"
00:02:27.150 Message: lib/pcapng: Defining dependency "pcapng"
00:02:27.150 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:27.150 Message: lib/power: Defining dependency "power"
00:02:27.150 Message: lib/rawdev: Defining dependency "rawdev"
00:02:27.150 Message: lib/regexdev: Defining dependency "regexdev"
00:02:27.150 Message: lib/dmadev: Defining dependency "dmadev"
00:02:27.150 Message: lib/rib: Defining dependency "rib"
00:02:27.150 Message: lib/reorder: Defining dependency "reorder"
00:02:27.150 Message: lib/sched: Defining dependency "sched"
00:02:27.150 Message: lib/security: Defining dependency "security"
00:02:27.150 Message: lib/stack: Defining dependency "stack"
00:02:27.150 Has header "linux/userfaultfd.h" : YES
00:02:27.150 Message: lib/vhost: Defining dependency "vhost"
00:02:27.150 Message: lib/ipsec: Defining dependency "ipsec"
00:02:27.150 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:27.150 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:27.150 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:27.150 Message: lib/fib: Defining dependency "fib"
00:02:27.150 Message: lib/port: Defining dependency "port"
00:02:27.150 Message: lib/pdump: Defining dependency "pdump"
00:02:27.150 Message: lib/table: Defining dependency "table"
00:02:27.150 Message: lib/pipeline: Defining dependency "pipeline"
00:02:27.150 Message: lib/graph: Defining dependency "graph"
00:02:27.150 Message: lib/node: Defining dependency "node"
00:02:27.150 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:27.150 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:27.150 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:27.150 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:27.150 Compiler for C supports arguments -Wno-sign-compare: YES
00:02:27.150 Compiler for C supports arguments -Wno-unused-value: YES
00:02:27.150 Compiler for C supports arguments -Wno-format: YES
00:02:27.150 Compiler for C supports arguments -Wno-format-security: YES
00:02:27.150 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:02:27.150 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:27.408 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:02:27.409 Compiler for C supports arguments -Wno-unused-parameter: YES
00:02:27.409 Fetching value of define "__AVX2__" : 1 (cached)
00:02:27.409 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:27.409 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:27.409 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:27.409 Compiler for C supports arguments -mavx512bw: YES (cached)
00:02:27.409 Compiler for C supports arguments -march=skylake-avx512: YES
00:02:27.409 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:02:27.409 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:27.409 Configuring doxy-api.conf using configuration
00:02:27.409 Program sphinx-build found: NO
00:02:27.409 Configuring rte_build_config.h using configuration
00:02:27.409 Message:
00:02:27.409 =================
00:02:27.409 Applications Enabled
00:02:27.409 =================
00:02:27.409
00:02:27.409 apps:
00:02:27.409 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf,
00:02:27.409 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad,
00:02:27.409 test-security-perf,
00:02:27.409
00:02:27.409 Message:
00:02:27.409 =================
00:02:27.409 Libraries Enabled
00:02:27.409 =================
00:02:27.409
00:02:27.409 libs:
00:02:27.409 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net,
00:02:27.409 meter, ethdev, pci, cmdline, metrics, hash, timer, acl,
00:02:27.409 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd,
00:02:27.409 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm,
00:02:27.409 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:02:27.409 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:02:27.409 table, pipeline, graph, node,
00:02:27.409
00:02:27.409 Message:
00:02:27.409 ===============
00:02:27.409 Drivers Enabled
00:02:27.409 ===============
00:02:27.409
00:02:27.409 common:
00:02:27.409
00:02:27.409 bus:
00:02:27.409 pci, vdev,
00:02:27.409 mempool:
00:02:27.409 ring,
00:02:27.409 dma:
00:02:27.409
00:02:27.409 net:
00:02:27.409 i40e,
00:02:27.409 raw:
00:02:27.409
00:02:27.409 crypto:
00:02:27.409
00:02:27.409 compress:
00:02:27.409
00:02:27.409 regex:
00:02:27.409
00:02:27.409 vdpa:
00:02:27.409
00:02:27.409 event:
00:02:27.409
00:02:27.409 baseband:
00:02:27.409
00:02:27.409 gpu:
00:02:27.409
00:02:27.409
00:02:27.409 Message:
00:02:27.409 =================
00:02:27.409 Content Skipped
00:02:27.409 =================
00:02:27.409
00:02:27.409 apps:
00:02:27.409
00:02:27.409 libs:
00:02:27.409 kni: explicitly disabled via build config (deprecated lib)
00:02:27.409 flow_classify: explicitly disabled via build config (deprecated lib)
00:02:27.409
00:02:27.409 drivers:
00:02:27.409 common/cpt: not in enabled drivers build config
00:02:27.409 common/dpaax: not in enabled drivers build config
00:02:27.409 common/iavf: not in enabled drivers build config
00:02:27.409 common/idpf: not in enabled drivers build config
00:02:27.409 common/mvep: not in enabled drivers build config
00:02:27.409 common/octeontx: not in enabled drivers build config
00:02:27.409 bus/auxiliary: not in enabled drivers build config
00:02:27.409 bus/dpaa: not in enabled drivers build config
00:02:27.409 bus/fslmc: not in enabled drivers build config
00:02:27.409 bus/ifpga: not in enabled drivers build config
00:02:27.409 bus/vmbus: not in enabled drivers build config
00:02:27.409 common/cnxk: not in enabled drivers build config
00:02:27.409 common/mlx5: not in enabled drivers build config
00:02:27.409 common/qat: not in enabled drivers build config
00:02:27.409 common/sfc_efx: not in enabled drivers build config
00:02:27.409 mempool/bucket: not in enabled drivers build config
00:02:27.409 mempool/cnxk: not in enabled drivers build config
00:02:27.409 mempool/dpaa: not in enabled drivers build config
00:02:27.409 mempool/dpaa2: not in enabled drivers build config
00:02:27.409 mempool/octeontx: not in enabled drivers build config
00:02:27.409 mempool/stack: not in enabled drivers build config
00:02:27.409 dma/cnxk: not in enabled drivers build config
00:02:27.409 dma/dpaa: not in enabled drivers build config
00:02:27.409 dma/dpaa2: not in enabled drivers build config
00:02:27.409 dma/hisilicon: not in enabled drivers build config
00:02:27.409 dma/idxd: not in enabled drivers build config
00:02:27.409 dma/ioat: not in enabled drivers build config
00:02:27.409 dma/skeleton: not in enabled drivers build config
00:02:27.409 net/af_packet: not in enabled drivers build config
00:02:27.409 net/af_xdp: not in enabled drivers build config
00:02:27.409 net/ark: not in enabled drivers build config
00:02:27.409 net/atlantic: not in enabled drivers build config
00:02:27.409 net/avp: not in enabled drivers build config
00:02:27.409 net/axgbe: not in enabled drivers build config
00:02:27.409 net/bnx2x: not in enabled drivers build config
00:02:27.409 net/bnxt: not in enabled drivers build config
00:02:27.409 net/bonding: not in enabled drivers build config
00:02:27.409 net/cnxk: not in enabled drivers build config
00:02:27.409 net/cxgbe: not in enabled drivers build config
00:02:27.409 net/dpaa: not in enabled drivers build config
00:02:27.409 net/dpaa2: not in enabled drivers build config
00:02:27.409 net/e1000: not in enabled drivers build config
00:02:27.409 net/ena: not in enabled drivers build config
00:02:27.409 net/enetc: not in enabled drivers build config
00:02:27.409 net/enetfec: not in enabled drivers build config
00:02:27.409 net/enic: not in enabled drivers build config
00:02:27.409 net/failsafe: not in enabled drivers build config
00:02:27.409 net/fm10k: not in enabled drivers build config
00:02:27.409 net/gve: not in enabled drivers build config
00:02:27.409 net/hinic: not in enabled drivers build config
00:02:27.409 net/hns3: not in enabled drivers build config
00:02:27.409 net/iavf: not in enabled drivers build config
00:02:27.409 net/ice: not in enabled drivers build config
00:02:27.409 net/idpf: not in enabled drivers build config
00:02:27.409 net/igc: not in enabled drivers build config
00:02:27.409 net/ionic: not in enabled drivers build config
00:02:27.409 net/ipn3ke: not in enabled drivers build config
00:02:27.409 net/ixgbe: not in enabled drivers build config
00:02:27.409 net/kni: not in enabled drivers build config
00:02:27.409 net/liquidio: not in enabled drivers build config
00:02:27.409 net/mana: not in enabled drivers build config
00:02:27.409 net/memif: not in enabled drivers build config
00:02:27.409 net/mlx4: not in enabled drivers build config
00:02:27.409 net/mlx5: not in enabled drivers build config
00:02:27.409 net/mvneta: not in enabled drivers build config
00:02:27.409 net/mvpp2: not in enabled drivers build config
00:02:27.409 net/netvsc: not in enabled drivers build config
00:02:27.409 net/nfb: not in enabled drivers build config
00:02:27.409 net/nfp: not in enabled drivers build config
00:02:27.409 net/ngbe: not in enabled drivers build config
00:02:27.409 net/null: not in enabled drivers build config
00:02:27.409 net/octeontx: not in enabled drivers build config
00:02:27.409 net/octeon_ep: not in enabled drivers build config
00:02:27.409 net/pcap: not in enabled drivers build config
00:02:27.409 net/pfe: not in enabled drivers build config
00:02:27.409 net/qede: not in enabled drivers build config
00:02:27.409 net/ring: not in enabled drivers build config
00:02:27.409 net/sfc: not in enabled drivers build config
00:02:27.409 net/softnic: not in enabled drivers build config
00:02:27.409 net/tap: not in enabled drivers build config
00:02:27.409 net/thunderx: not in enabled drivers build config
00:02:27.409 net/txgbe: not in enabled drivers build config
00:02:27.409 net/vdev_netvsc: not in enabled drivers build config
00:02:27.409 net/vhost: not in enabled drivers build config
00:02:27.409 net/virtio: not in enabled drivers build config
00:02:27.409 net/vmxnet3: not in enabled drivers build config
00:02:27.409 raw/cnxk_bphy: not in enabled drivers build config
00:02:27.409 raw/cnxk_gpio: not in enabled drivers build config
00:02:27.409 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:27.409 raw/ifpga: not in enabled drivers build config
00:02:27.409 raw/ntb: not in enabled drivers build config
00:02:27.409 raw/skeleton: not in enabled drivers build config
00:02:27.409 crypto/armv8: not in enabled drivers build config
00:02:27.409 crypto/bcmfs: not in enabled drivers build config
00:02:27.409 crypto/caam_jr: not in enabled drivers build config
00:02:27.409 crypto/ccp: not in enabled drivers build config
00:02:27.409 crypto/cnxk: not in enabled drivers
build config 00:02:27.409 crypto/dpaa_sec: not in enabled drivers build config 00:02:27.409 crypto/dpaa2_sec: not in enabled drivers build config 00:02:27.409 crypto/ipsec_mb: not in enabled drivers build config 00:02:27.409 crypto/mlx5: not in enabled drivers build config 00:02:27.409 crypto/mvsam: not in enabled drivers build config 00:02:27.409 crypto/nitrox: not in enabled drivers build config 00:02:27.409 crypto/null: not in enabled drivers build config 00:02:27.409 crypto/octeontx: not in enabled drivers build config 00:02:27.409 crypto/openssl: not in enabled drivers build config 00:02:27.409 crypto/scheduler: not in enabled drivers build config 00:02:27.409 crypto/uadk: not in enabled drivers build config 00:02:27.409 crypto/virtio: not in enabled drivers build config 00:02:27.409 compress/isal: not in enabled drivers build config 00:02:27.409 compress/mlx5: not in enabled drivers build config 00:02:27.409 compress/octeontx: not in enabled drivers build config 00:02:27.409 compress/zlib: not in enabled drivers build config 00:02:27.409 regex/mlx5: not in enabled drivers build config 00:02:27.409 regex/cn9k: not in enabled drivers build config 00:02:27.409 vdpa/ifc: not in enabled drivers build config 00:02:27.409 vdpa/mlx5: not in enabled drivers build config 00:02:27.409 vdpa/sfc: not in enabled drivers build config 00:02:27.409 event/cnxk: not in enabled drivers build config 00:02:27.409 event/dlb2: not in enabled drivers build config 00:02:27.409 event/dpaa: not in enabled drivers build config 00:02:27.409 event/dpaa2: not in enabled drivers build config 00:02:27.409 event/dsw: not in enabled drivers build config 00:02:27.409 event/opdl: not in enabled drivers build config 00:02:27.410 event/skeleton: not in enabled drivers build config 00:02:27.410 event/sw: not in enabled drivers build config 00:02:27.410 event/octeontx: not in enabled drivers build config 00:02:27.410 baseband/acc: not in enabled drivers build config 00:02:27.410 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:27.410 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:27.410 baseband/la12xx: not in enabled drivers build config 00:02:27.410 baseband/null: not in enabled drivers build config 00:02:27.410 baseband/turbo_sw: not in enabled drivers build config 00:02:27.410 gpu/cuda: not in enabled drivers build config 00:02:27.410 00:02:27.410 00:02:27.410 Build targets in project: 311 00:02:27.410 00:02:27.410 DPDK 22.11.4 00:02:27.410 00:02:27.410 User defined options 00:02:27.410 libdir : lib 00:02:27.410 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:27.410 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:27.410 c_link_args : 00:02:27.410 enable_docs : false 00:02:27.410 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:27.410 enable_kmods : false 00:02:27.410 machine : native 00:02:27.410 tests : false 00:02:27.410 00:02:27.410 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:27.410 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
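
For reference, the "User defined options" summary above maps onto a meson invocation along the following lines. This is a reconstructed sketch, not the literal command from autobuild_common.sh (which, per the warning above, still uses the deprecated `meson [options]` spelling rather than `meson setup [options]`); the option values are taken verbatim from the summary, and the empty c_link_args is omitted:

  meson setup build-tmp \
    --prefix=/home/vagrant/spdk_repo/dpdk/build \
    --libdir=lib \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_docs=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
    -Denable_kmods=false \
    -Dmachine=native \
    -Dtests=false
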
00:02:27.410 17:48:44 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:27.410 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:27.668 [1/740] Generating lib/rte_kvargs_def with a custom command 00:02:27.668 [2/740] Generating lib/rte_kvargs_mingw with a custom command 00:02:27.668 [3/740] Generating lib/rte_telemetry_def with a custom command 00:02:27.668 [4/740] Generating lib/rte_telemetry_mingw with a custom command 00:02:27.668 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:27.668 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:27.668 [7/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:27.668 [8/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:27.668 [9/740] Linking static target lib/librte_kvargs.a 00:02:27.668 [10/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:27.668 [11/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:27.668 [12/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:27.668 [13/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:27.668 [14/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:27.926 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:27.926 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:27.926 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:27.926 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:27.926 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:27.926 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:27.926 [21/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.926 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:27.926 [23/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:27.926 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:27.926 [25/740] Linking target lib/librte_kvargs.so.23.0 00:02:27.926 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:27.926 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:27.926 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:27.926 [29/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:28.183 [30/740] Linking static target lib/librte_telemetry.a 00:02:28.183 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:28.183 [32/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:28.183 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:28.183 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:28.183 [35/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:28.183 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:28.183 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:28.183 [38/740] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:28.183 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:28.183 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:28.183 [41/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:28.441 [42/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:28.441 [43/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.441 [44/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:28.441 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:28.441 [46/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:28.441 [47/740] Linking target lib/librte_telemetry.so.23.0 00:02:28.441 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:28.441 [49/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:28.441 [50/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:28.441 [51/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:28.441 [52/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:28.441 [53/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:28.441 [54/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:28.441 [55/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:28.441 [56/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:28.699 [57/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:28.699 [58/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:28.699 [59/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:28.699 [60/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:28.699 [61/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:28.699 [62/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:28.699 [63/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:28.699 [64/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:28.699 [65/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:28.699 [66/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:28.699 [67/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:28.699 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:28.699 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:28.699 [70/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:28.699 [71/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:28.699 [72/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:28.699 [73/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:28.699 [74/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:28.699 [75/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:28.699 [76/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:28.957 [77/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:28.957 [78/740] Generating 
lib/rte_eal_mingw with a custom command 00:02:28.957 [79/740] Generating lib/rte_eal_def with a custom command 00:02:28.957 [80/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:28.957 [81/740] Generating lib/rte_ring_mingw with a custom command 00:02:28.957 [82/740] Generating lib/rte_ring_def with a custom command 00:02:28.957 [83/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:28.957 [84/740] Generating lib/rte_rcu_def with a custom command 00:02:28.957 [85/740] Generating lib/rte_rcu_mingw with a custom command 00:02:28.957 [86/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:28.957 [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:28.957 [88/740] Linking static target lib/librte_ring.a 00:02:28.957 [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:28.957 [90/740] Generating lib/rte_mempool_def with a custom command 00:02:28.957 [91/740] Generating lib/rte_mempool_mingw with a custom command 00:02:29.215 [92/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:29.215 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:29.215 [94/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.215 [95/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:29.215 [96/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:29.215 [97/740] Generating lib/rte_mbuf_def with a custom command 00:02:29.215 [98/740] Generating lib/rte_mbuf_mingw with a custom command 00:02:29.215 [99/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:29.215 [100/740] Linking static target lib/librte_eal.a 00:02:29.215 [101/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:29.474 [102/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:29.474 [103/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:29.474 [104/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:29.474 [105/740] Linking static target lib/librte_rcu.a 00:02:29.732 [106/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:29.733 [107/740] Linking static target lib/librte_mempool.a 00:02:29.733 [108/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:29.733 [109/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:29.733 [110/740] Generating lib/rte_net_def with a custom command 00:02:29.733 [111/740] Generating lib/rte_net_mingw with a custom command 00:02:29.733 [112/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:29.733 [113/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:29.733 [114/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:29.733 [115/740] Generating lib/rte_meter_def with a custom command 00:02:29.733 [116/740] Generating lib/rte_meter_mingw with a custom command 00:02:29.733 [117/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.733 [118/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:29.992 [119/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:29.992 [120/740] Linking static target lib/librte_meter.a 00:02:29.992 [121/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:29.992 [122/740] Generating lib/meter.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:29.992 [123/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:30.251 [124/740] Linking static target lib/librte_mbuf.a 00:02:30.251 [125/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:30.251 [126/740] Linking static target lib/librte_net.a 00:02:30.251 [127/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:30.251 [128/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:30.251 [129/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:30.251 [130/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:30.251 [131/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:30.251 [132/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.511 [133/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.511 [134/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:30.511 [135/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.770 [136/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:30.770 [137/740] Generating lib/rte_ethdev_def with a custom command 00:02:30.770 [138/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:30.770 [139/740] Generating lib/rte_ethdev_mingw with a custom command 00:02:30.770 [140/740] Generating lib/rte_pci_def with a custom command 00:02:30.770 [141/740] Generating lib/rte_pci_mingw with a custom command 00:02:30.770 [142/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:30.770 [143/740] Linking static target lib/librte_pci.a 00:02:30.770 [144/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:30.770 [145/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:30.771 [146/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:30.771 [147/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:31.029 [148/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:31.029 [149/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.029 [150/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:31.029 [151/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:31.029 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:31.029 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:31.029 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:31.029 [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:31.029 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:31.029 [157/740] Generating lib/rte_cmdline_def with a custom command 00:02:31.289 [158/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:31.289 [159/740] Generating lib/rte_cmdline_mingw with a custom command 00:02:31.289 [160/740] Generating lib/rte_metrics_def with a custom command 00:02:31.289 [161/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:31.289 [162/740] Generating lib/rte_metrics_mingw with a custom command 00:02:31.289 [163/740] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:31.289 [164/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:31.289 [165/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:31.289 [166/740] Generating lib/rte_hash_def with a custom command 00:02:31.289 [167/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:31.289 [168/740] Linking static target lib/librte_cmdline.a 00:02:31.289 [169/740] Generating lib/rte_hash_mingw with a custom command 00:02:31.289 [170/740] Generating lib/rte_timer_def with a custom command 00:02:31.289 [171/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:31.289 [172/740] Generating lib/rte_timer_mingw with a custom command 00:02:31.548 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:31.548 [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:31.548 [175/740] Linking static target lib/librte_metrics.a 00:02:31.548 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:31.548 [177/740] Linking static target lib/librte_timer.a 00:02:31.807 [178/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.807 [179/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:31.807 [180/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:32.066 [181/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.066 [182/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:32.066 [183/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.066 [184/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:32.066 [185/740] Generating lib/rte_acl_def with a custom command 00:02:32.324 [186/740] Linking static target lib/librte_ethdev.a 00:02:32.324 [187/740] Generating lib/rte_acl_mingw with a custom command 00:02:32.324 [188/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:32.324 [189/740] Generating lib/rte_bbdev_def with a custom command 00:02:32.324 [190/740] Generating lib/rte_bbdev_mingw with a custom command 00:02:32.324 [191/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:32.324 [192/740] Generating lib/rte_bitratestats_def with a custom command 00:02:32.324 [193/740] Generating lib/rte_bitratestats_mingw with a custom command 00:02:32.582 [194/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:32.582 [195/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:32.582 [196/740] Linking static target lib/librte_bitratestats.a 00:02:32.840 [197/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:32.840 [198/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:32.840 [199/740] Linking static target lib/librte_bbdev.a 00:02:32.840 [200/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.098 [201/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:33.098 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:33.356 [203/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:33.357 [204/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:33.357 [205/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.357 
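
The bracketed [N/740] counters running through this section are ninja's progress over the 740 build edges generated for this configuration (meson's "Build targets in project: 311" counts top-level targets, not individual edges). Assuming a stock ninja binary, the same build directory can be resumed or inspected by hand:

  ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10        # resume the build incrementally
  ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -t targets  # list generated targets via ninja's built-in tool
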
[206/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:33.357 [207/740] Linking static target lib/librte_hash.a 00:02:33.615 [208/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:33.873 [209/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:33.873 [210/740] Generating lib/rte_bpf_def with a custom command 00:02:33.873 [211/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:33.873 [212/740] Generating lib/rte_bpf_mingw with a custom command 00:02:33.873 [213/740] Generating lib/rte_cfgfile_def with a custom command 00:02:33.873 [214/740] Generating lib/rte_cfgfile_mingw with a custom command 00:02:34.132 [215/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.132 [216/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:34.132 [217/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:34.132 [218/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:34.132 [219/740] Linking static target lib/librte_cfgfile.a 00:02:34.132 [220/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:34.132 [221/740] Generating lib/rte_compressdev_def with a custom command 00:02:34.132 [222/740] Generating lib/rte_compressdev_mingw with a custom command 00:02:34.391 [223/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:34.391 [224/740] Linking static target lib/librte_bpf.a 00:02:34.391 [225/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:34.391 [226/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.391 [227/740] Linking static target lib/librte_acl.a 00:02:34.391 [228/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:34.391 [229/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:34.391 [230/740] Generating lib/rte_cryptodev_def with a custom command 00:02:34.391 [231/740] Generating lib/rte_cryptodev_mingw with a custom command 00:02:34.391 [232/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:34.650 [233/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:34.650 [234/740] Linking static target lib/librte_compressdev.a 00:02:34.650 [235/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:34.650 [236/740] Generating lib/rte_distributor_def with a custom command 00:02:34.650 [237/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.650 [238/740] Generating lib/rte_distributor_mingw with a custom command 00:02:34.650 [239/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.650 [240/740] Generating lib/rte_efd_def with a custom command 00:02:34.650 [241/740] Generating lib/rte_efd_mingw with a custom command 00:02:34.910 [242/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:34.910 [243/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:35.169 [244/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:35.169 [245/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:35.169 [246/740] Linking static target lib/librte_distributor.a 00:02:35.169 [247/740] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:35.428 [248/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.428 [249/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.428 [250/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:35.687 [251/740] Generating lib/rte_eventdev_def with a custom command 00:02:35.687 [252/740] Generating lib/rte_eventdev_mingw with a custom command 00:02:35.687 [253/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:35.687 [254/740] Linking static target lib/librte_efd.a 00:02:35.945 [255/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:35.945 [256/740] Generating lib/rte_gpudev_def with a custom command 00:02:35.945 [257/740] Generating lib/rte_gpudev_mingw with a custom command 00:02:35.945 [258/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.204 [259/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:36.204 [260/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:36.204 [261/740] Linking static target lib/librte_gpudev.a 00:02:36.204 [262/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:36.204 [263/740] Linking static target lib/librte_cryptodev.a 00:02:36.204 [264/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.463 [265/740] Linking target lib/librte_eal.so.23.0 00:02:36.463 [266/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:36.463 [267/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:36.463 [268/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:36.463 [269/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:36.463 [270/740] Generating lib/rte_gro_def with a custom command 00:02:36.463 [271/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:36.463 [272/740] Linking target lib/librte_ring.so.23.0 00:02:36.463 [273/740] Linking target lib/librte_meter.so.23.0 00:02:36.463 [274/740] Linking target lib/librte_pci.so.23.0 00:02:36.723 [275/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:36.723 [276/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:36.723 [277/740] Linking target lib/librte_timer.so.23.0 00:02:36.723 [278/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:36.723 [279/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:36.723 [280/740] Linking target lib/librte_rcu.so.23.0 00:02:36.723 [281/740] Linking target lib/librte_acl.so.23.0 00:02:36.723 [282/740] Linking target lib/librte_mempool.so.23.0 00:02:36.723 [283/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:36.723 [284/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.723 [285/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:36.723 [286/740] Linking target lib/librte_cfgfile.so.23.0 00:02:36.723 [287/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:36.723 [288/740] Generating lib/rte_gro_mingw with a custom command 00:02:36.982 [289/740] Generating symbol 
file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:36.982 [290/740] Linking target lib/librte_mbuf.so.23.0 00:02:36.982 [291/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:36.982 [292/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:36.982 [293/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.982 [294/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:36.982 [295/740] Linking static target lib/librte_gro.a 00:02:36.982 [296/740] Linking target lib/librte_net.so.23.0 00:02:36.982 [297/740] Linking target lib/librte_bbdev.so.23.0 00:02:36.982 [298/740] Linking target lib/librte_compressdev.so.23.0 00:02:37.240 [299/740] Linking target lib/librte_distributor.so.23.0 00:02:37.240 [300/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:37.240 [301/740] Linking static target lib/librte_eventdev.a 00:02:37.240 [302/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:37.240 [303/740] Linking target lib/librte_gpudev.so.23.0 00:02:37.240 [304/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:37.240 [305/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:37.240 [306/740] Linking target lib/librte_cmdline.so.23.0 00:02:37.240 [307/740] Linking target lib/librte_hash.so.23.0 00:02:37.240 [308/740] Linking target lib/librte_ethdev.so.23.0 00:02:37.240 [309/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:37.240 [310/740] Generating lib/rte_gso_def with a custom command 00:02:37.240 [311/740] Generating lib/rte_gso_mingw with a custom command 00:02:37.240 [312/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:37.240 [313/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:37.499 [314/740] Linking target lib/librte_efd.so.23.0 00:02:37.499 [315/740] Linking target lib/librte_metrics.so.23.0 00:02:37.499 [316/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:37.499 [317/740] Linking target lib/librte_bpf.so.23.0 00:02:37.499 [318/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:37.499 [319/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.499 [320/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:37.499 [321/740] Linking static target lib/librte_gso.a 00:02:37.499 [322/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:37.499 [323/740] Linking target lib/librte_gro.so.23.0 00:02:37.499 [324/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:37.499 [325/740] Linking target lib/librte_bitratestats.so.23.0 00:02:37.499 [326/740] Generating lib/rte_ip_frag_def with a custom command 00:02:37.499 [327/740] Generating lib/rte_ip_frag_mingw with a custom command 00:02:37.758 [328/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.758 [329/740] Linking target lib/librte_gso.so.23.0 00:02:37.758 [330/740] Generating lib/rte_jobstats_def with a custom command 00:02:37.758 [331/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:37.758 [332/740] Generating lib/rte_jobstats_mingw with a custom command 00:02:37.758 [333/740] Linking static target lib/librte_jobstats.a 00:02:37.758 [334/740] Compiling C 
object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:37.758 [335/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:37.758 [336/740] Generating lib/rte_latencystats_def with a custom command 00:02:37.758 [337/740] Generating lib/rte_latencystats_mingw with a custom command 00:02:38.017 [338/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:38.017 [339/740] Generating lib/rte_lpm_def with a custom command 00:02:38.017 [340/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:38.017 [341/740] Generating lib/rte_lpm_mingw with a custom command 00:02:38.017 [342/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.017 [343/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:38.017 [344/740] Linking target lib/librte_jobstats.so.23.0 00:02:38.017 [345/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:38.017 [346/740] Linking static target lib/librte_ip_frag.a 00:02:38.276 [347/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:38.276 [348/740] Linking static target lib/librte_latencystats.a 00:02:38.276 [349/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:38.534 [350/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.534 [351/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:38.534 [352/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:38.534 [353/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:38.534 [354/740] Generating lib/rte_member_def with a custom command 00:02:38.534 [355/740] Linking target lib/librte_ip_frag.so.23.0 00:02:38.534 [356/740] Generating lib/rte_member_mingw with a custom command 00:02:38.534 [357/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.534 [358/740] Generating lib/rte_pcapng_def with a custom command 00:02:38.534 [359/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.534 [360/740] Generating lib/rte_pcapng_mingw with a custom command 00:02:38.534 [361/740] Linking target lib/librte_cryptodev.so.23.0 00:02:38.534 [362/740] Linking target lib/librte_latencystats.so.23.0 00:02:38.534 [363/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:38.534 [364/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:38.534 [365/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:38.534 [366/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:38.793 [367/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:38.793 [368/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:38.793 [369/740] Linking static target lib/librte_lpm.a 00:02:38.793 [370/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:38.793 [371/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:39.052 [372/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:39.052 [373/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:39.052 [374/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 
00:02:39.052 [375/740] Generating lib/rte_power_def with a custom command 00:02:39.052 [376/740] Generating lib/rte_power_mingw with a custom command 00:02:39.052 [377/740] Generating lib/rte_rawdev_def with a custom command 00:02:39.052 [378/740] Generating lib/rte_rawdev_mingw with a custom command 00:02:39.052 [379/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.052 [380/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:39.052 [381/740] Linking static target lib/librte_pcapng.a 00:02:39.052 [382/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.052 [383/740] Linking target lib/librte_lpm.so.23.0 00:02:39.052 [384/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:39.311 [385/740] Generating lib/rte_regexdev_def with a custom command 00:02:39.311 [386/740] Linking target lib/librte_eventdev.so.23.0 00:02:39.311 [387/740] Generating lib/rte_regexdev_mingw with a custom command 00:02:39.311 [388/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:39.311 [389/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:39.311 [390/740] Generating lib/rte_dmadev_def with a custom command 00:02:39.311 [391/740] Generating lib/rte_dmadev_mingw with a custom command 00:02:39.311 [392/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:39.311 [393/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:39.311 [394/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:39.311 [395/740] Generating lib/rte_rib_def with a custom command 00:02:39.311 [396/740] Linking static target lib/librte_rawdev.a 00:02:39.311 [397/740] Generating lib/rte_rib_mingw with a custom command 00:02:39.311 [398/740] Generating lib/rte_reorder_def with a custom command 00:02:39.311 [399/740] Generating lib/rte_reorder_mingw with a custom command 00:02:39.573 [400/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.573 [401/740] Linking target lib/librte_pcapng.so.23.0 00:02:39.573 [402/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:39.573 [403/740] Linking static target lib/librte_power.a 00:02:39.573 [404/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:39.574 [405/740] Linking static target lib/librte_regexdev.a 00:02:39.574 [406/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:39.574 [407/740] Linking static target lib/librte_dmadev.a 00:02:39.574 [408/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:39.574 [409/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:39.832 [410/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.832 [411/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:39.832 [412/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:39.832 [413/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:39.832 [414/740] Linking target lib/librte_rawdev.so.23.0 00:02:39.832 [415/740] Linking static target lib/librte_member.a 00:02:39.832 [416/740] Generating lib/rte_sched_def with a custom command 00:02:39.832 [417/740] Generating lib/rte_sched_mingw with a custom command 00:02:39.832 [418/740] 
Generating lib/rte_security_def with a custom command 00:02:39.832 [419/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:39.832 [420/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:39.832 [421/740] Linking static target lib/librte_reorder.a 00:02:39.832 [422/740] Generating lib/rte_security_mingw with a custom command 00:02:40.090 [423/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:40.090 [424/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:40.090 [425/740] Generating lib/rte_stack_def with a custom command 00:02:40.090 [426/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:40.090 [427/740] Linking static target lib/librte_rib.a 00:02:40.090 [428/740] Generating lib/rte_stack_mingw with a custom command 00:02:40.090 [429/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:40.090 [430/740] Linking static target lib/librte_stack.a 00:02:40.090 [431/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.090 [432/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.090 [433/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.090 [434/740] Linking target lib/librte_dmadev.so.23.0 00:02:40.090 [435/740] Linking target lib/librte_reorder.so.23.0 00:02:40.347 [436/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:40.347 [437/740] Linking target lib/librte_member.so.23.0 00:02:40.347 [438/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.347 [439/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:40.347 [440/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.347 [441/740] Linking target lib/librte_stack.so.23.0 00:02:40.347 [442/740] Linking target lib/librte_regexdev.so.23.0 00:02:40.347 [443/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:40.347 [444/740] Linking static target lib/librte_security.a 00:02:40.606 [445/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.606 [446/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.606 [447/740] Linking target lib/librte_rib.so.23.0 00:02:40.606 [448/740] Linking target lib/librte_power.so.23.0 00:02:40.606 [449/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:40.606 [450/740] Generating lib/rte_vhost_def with a custom command 00:02:40.606 [451/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:40.606 [452/740] Generating lib/rte_vhost_mingw with a custom command 00:02:40.865 [453/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:40.865 [454/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.865 [455/740] Linking target lib/librte_security.so.23.0 00:02:40.865 [456/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:40.865 [457/740] Linking static target lib/librte_sched.a 00:02:40.865 [458/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:40.865 [459/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:41.125 [460/740] Generating lib/sched.sym_chk with a custom command (wrapped 
by meson to capture output) 00:02:41.384 [461/740] Linking target lib/librte_sched.so.23.0 00:02:41.384 [462/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:41.384 [463/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:41.384 [464/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:41.384 [465/740] Generating lib/rte_ipsec_def with a custom command 00:02:41.384 [466/740] Generating lib/rte_ipsec_mingw with a custom command 00:02:41.384 [467/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:41.384 [468/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:41.643 [469/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:41.643 [470/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:41.643 [471/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:41.643 [472/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:41.643 [473/740] Generating lib/rte_fib_def with a custom command 00:02:41.902 [474/740] Generating lib/rte_fib_mingw with a custom command 00:02:41.902 [475/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:42.160 [476/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:42.160 [477/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:42.160 [478/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:42.160 [479/740] Linking static target lib/librte_ipsec.a 00:02:42.418 [480/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:42.419 [481/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:42.419 [482/740] Linking static target lib/librte_fib.a 00:02:42.419 [483/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:42.419 [484/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:42.419 [485/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.677 [486/740] Linking target lib/librte_ipsec.so.23.0 00:02:42.677 [487/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:42.677 [488/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.677 [489/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:42.677 [490/740] Linking target lib/librte_fib.so.23.0 00:02:42.677 [491/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:43.245 [492/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:43.245 [493/740] Generating lib/rte_port_def with a custom command 00:02:43.245 [494/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:43.245 [495/740] Generating lib/rte_port_mingw with a custom command 00:02:43.245 [496/740] Generating lib/rte_pdump_def with a custom command 00:02:43.245 [497/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:43.245 [498/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:43.245 [499/740] Generating lib/rte_pdump_mingw with a custom command 00:02:43.245 [500/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:43.504 [501/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:43.504 [502/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:43.504 [503/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:43.504 
[504/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:43.504 [505/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:43.763 [506/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:43.763 [507/740] Linking static target lib/librte_port.a 00:02:43.763 [508/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:43.763 [509/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:43.763 [510/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:44.021 [511/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:44.021 [512/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:44.281 [513/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.281 [514/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:44.281 [515/740] Linking static target lib/librte_pdump.a 00:02:44.281 [516/740] Linking target lib/librte_port.so.23.0 00:02:44.281 [517/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:44.541 [518/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:44.541 [519/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:44.541 [520/740] Generating lib/rte_table_def with a custom command 00:02:44.541 [521/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.541 [522/740] Generating lib/rte_table_mingw with a custom command 00:02:44.541 [523/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:44.541 [524/740] Linking target lib/librte_pdump.so.23.0 00:02:44.800 [525/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:44.800 [526/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:44.800 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:44.800 [528/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:44.800 [529/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:44.800 [530/740] Generating lib/rte_pipeline_def with a custom command 00:02:44.800 [531/740] Linking static target lib/librte_table.a 00:02:44.800 [532/740] Generating lib/rte_pipeline_mingw with a custom command 00:02:44.800 [533/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:45.368 [534/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:45.368 [535/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:45.368 [536/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:45.368 [537/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:45.368 [538/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.368 [539/740] Linking target lib/librte_table.so.23.0 00:02:45.627 [540/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:45.627 [541/740] Generating lib/rte_graph_def with a custom command 00:02:45.627 [542/740] Generating lib/rte_graph_mingw with a custom command 00:02:45.627 [543/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:45.627 [544/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:45.627 [545/740] Compiling C object 
lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:45.887 [546/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:45.887 [547/740] Linking static target lib/librte_graph.a 00:02:45.887 [548/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:45.887 [549/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:46.147 [550/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:46.147 [551/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:46.147 [552/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:46.405 [553/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:46.405 [554/740] Generating lib/rte_node_def with a custom command 00:02:46.405 [555/740] Generating lib/rte_node_mingw with a custom command 00:02:46.405 [556/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.405 [557/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:46.665 [558/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:46.665 [559/740] Linking target lib/librte_graph.so.23.0 00:02:46.665 [560/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:46.665 [561/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:46.665 [562/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:46.665 [563/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:46.665 [564/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:46.665 [565/740] Generating drivers/rte_bus_pci_def with a custom command 00:02:46.665 [566/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:46.665 [567/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:46.666 [568/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:46.666 [569/740] Generating drivers/rte_bus_vdev_def with a custom command 00:02:46.924 [570/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:46.924 [571/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:46.924 [572/740] Linking static target lib/librte_node.a 00:02:46.924 [573/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:46.924 [574/740] Generating drivers/rte_mempool_ring_def with a custom command 00:02:46.924 [575/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:46.924 [576/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:46.924 [577/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:46.924 [578/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:46.924 [579/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.186 [580/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:47.186 [581/740] Linking target lib/librte_node.so.23.0 00:02:47.186 [582/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:47.186 [583/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:47.186 [584/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:47.186 [585/740] Linking static target drivers/librte_bus_vdev.a 00:02:47.186 [586/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:47.186 
[587/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:47.186 [588/740] Linking static target drivers/librte_bus_pci.a 00:02:47.473 [589/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.473 [590/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:47.473 [591/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:47.473 [592/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:47.473 [593/740] Linking target drivers/librte_bus_vdev.so.23.0 00:02:47.473 [594/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:47.473 [595/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:47.743 [596/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:47.743 [597/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.743 [598/740] Linking target drivers/librte_bus_pci.so.23.0 00:02:47.743 [599/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:47.743 [600/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:47.743 [601/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:47.743 [602/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:48.002 [603/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:48.002 [604/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:48.002 [605/740] Linking static target drivers/librte_mempool_ring.a 00:02:48.002 [606/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:48.002 [607/740] Linking target drivers/librte_mempool_ring.so.23.0 00:02:48.261 [608/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:48.520 [609/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:48.520 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:48.520 [611/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:48.779 [612/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:49.038 [613/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:49.038 [614/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:49.038 [615/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:49.298 [616/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:49.557 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:49.557 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:49.557 [619/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:49.816 [620/740] Generating drivers/rte_net_i40e_def with a custom command 00:02:49.816 [621/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:50.385 [622/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:50.385 [623/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:50.644 
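
Once the build is installed under the configured prefix, applications would typically locate the libraries and PMDs being linked above (librte_eal, librte_ethdev, the bus and net_i40e drivers, and so on) through pkg-config. A minimal sketch, assuming the standard libdpdk.pc that a DPDK install generates under the prefix and libdir shown in the configuration summary:

  PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig \
    pkg-config --cflags --libs libdpdk
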
[624/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:50.644 [625/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:50.644 [626/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:50.903 [627/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:50.903 [628/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:50.903 [629/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:50.903 [630/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:50.904 [631/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:51.163 [632/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:51.423 [633/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:51.423 [634/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:51.681 [635/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:51.681 [636/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:51.681 [637/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:51.681 [638/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:51.940 [639/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:51.940 [640/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:51.940 [641/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:51.940 [642/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:51.940 [643/740] Linking static target drivers/librte_net_i40e.a 00:02:51.940 [644/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:51.940 [645/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:52.199 [646/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:52.199 [647/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:52.461 [648/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:52.461 [649/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:52.722 [650/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:52.722 [651/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.722 [652/740] Linking target drivers/librte_net_i40e.so.23.0 00:02:52.981 [653/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:52.981 [654/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:52.981 [655/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:52.981 [656/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:52.981 [657/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:52.981 [658/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:52.981 [659/740] Compiling C 
object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:53.240 [660/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:53.240 [661/740] Linking static target lib/librte_vhost.a 00:02:53.240 [662/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:53.240 [663/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:53.240 [664/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:53.498 [665/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:53.498 [666/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:53.756 [667/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:53.756 [668/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:54.324 [669/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:54.324 [670/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:54.324 [671/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:54.324 [672/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.324 [673/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:54.324 [674/740] Linking target lib/librte_vhost.so.23.0 00:02:54.583 [675/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:54.583 [676/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:54.583 [677/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:54.843 [678/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:54.843 [679/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:54.843 [680/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:55.101 [681/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:55.101 [682/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:55.101 [683/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:55.101 [684/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:55.101 [685/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:55.359 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:55.359 [687/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:55.359 [688/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:55.359 [689/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:55.359 [690/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:55.359 [691/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:55.927 [692/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:55.927 [693/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:55.927 [694/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:55.927 [695/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:56.205 [696/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 
00:02:56.469 [697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:56.469 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:56.469 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:56.728 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:56.728 [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:56.728 [702/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:56.987 [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:56.987 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:57.246 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:57.246 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:57.246 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:57.506 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:57.764 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:57.764 [710/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:58.023 [711/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:58.023 [712/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:58.023 [713/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:58.023 [714/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:58.281 [715/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:58.281 [716/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:58.281 [717/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:58.281 [718/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:58.848 [719/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:59.786 [720/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:59.786 [721/740] Linking static target lib/librte_pipeline.a 00:03:00.045 [722/740] Linking target app/dpdk-dumpcap 00:03:00.045 [723/740] Linking target app/dpdk-test-cmdline 00:03:00.045 [724/740] Linking target app/dpdk-proc-info 00:03:00.045 [725/740] Linking target app/dpdk-pdump 00:03:00.045 [726/740] Linking target app/dpdk-test-bbdev 00:03:00.045 [727/740] Linking target app/dpdk-test-compress-perf 00:03:00.305 [728/740] Linking target app/dpdk-test-acl 00:03:00.305 [729/740] Linking target app/dpdk-test-eventdev 00:03:00.305 [730/740] Linking target app/dpdk-test-crypto-perf 00:03:00.564 [731/740] Linking target app/dpdk-test-gpudev 00:03:00.564 [732/740] Linking target app/dpdk-test-fib 00:03:00.564 [733/740] Linking target app/dpdk-test-regex 00:03:00.564 [734/740] Linking target app/dpdk-test-flow-perf 00:03:00.564 [735/740] Linking target app/dpdk-test-pipeline 00:03:00.564 [736/740] Linking target app/dpdk-test-security-perf 00:03:00.564 [737/740] Linking target app/dpdk-test-sad 00:03:00.564 [738/740] Linking target app/dpdk-testpmd 00:03:05.840 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.840 [740/740] Linking target lib/librte_pipeline.so.23.0 00:03:05.840 17:49:22 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:05.840 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:05.840 [0/1] Installing 
files. 00:03:05.840 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:05.840 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.841 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.841 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.842 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:05.843 
Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.843 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.844 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:05.844 
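The example trees above are installed with their Makefiles intact, so they can be rebuilt out of tree against this installed copy of DPDK. A minimal sketch, assuming the v22.11 example Makefiles locate DPDK through pkg-config and using the libdpdk.pc install location reported later in this log:

  # Point pkg-config at the .pc files this install step produces,
  # then rebuild one installed example in place (binaries land in its build/ subdir).
  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  make -C /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
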
Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.844 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.845 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.845 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.845 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.845 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.845 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.845 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.845 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.845 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.845 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.845 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.845 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.845 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.845 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:05.845 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:05.845 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:05.845 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:05.845 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:05.845 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:05.845 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:05.845 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:05.845 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:05.845 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:05.845 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:05.845 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
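The records above mark the switch from example sources to the built libraries themselves; each library lands twice, as a static archive (.a) and as a shared object whose file name carries the ABI version (.so.23.0). One way to confirm the SONAME baked into an installed copy, a sketch using binutils' readelf against a path taken from this log (the printed soname is an assumption, inferred from the symlink records at the end of the install):

  readelf -d /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23.0 | grep SONAME
  # expected (assumption): 0x000000000000000e (SONAME)  Library soname: [librte_eal.so.23]

The .23 soname is what consuming binaries record at link time, which is what the symlink installs at the end of this log exist to satisfy.
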
00:03:05.845 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.845 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_rawdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:05.846 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:05.846 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:05.846 Installing drivers/librte_net_i40e.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:05.846 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:05.846 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.846 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.846 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.846 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.846 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.846 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.846 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.846 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:05.846 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.108 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.108 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.108 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.108 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.108 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.108 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.108 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.108 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.108 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.108 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.108 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.108 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:06.108 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:06.108 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:06.108 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:06.108 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:06.108 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:06.108 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:06.108 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.109 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 
Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.110 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 
Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:06.111 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:06.111 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:06.111 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:06.111 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:06.111 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:06.111 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:06.111 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:06.111 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:06.111 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:06.111 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:06.111 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:06.111 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:06.111 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:06.111 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:06.111 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:06.111 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:06.111 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:06.111 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:06.111 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:06.111 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:06.111 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 
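
The libdpdk.pc and libdpdk-libs.pc files just installed into build/lib/pkgconfig are what make the staged tree consumable, and it is exactly how the SPDK configure step later in this log locates DPDK ("Using .../pkgconfig for additional libs"). A quick sketch of querying them:

    # Point pkg-config at the staged tree and ask for consumer flags.
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk      # prints the staged DPDK version
    pkg-config --cflags --libs libdpdk   # include and link flags for a consumer
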
00:03:06.111 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:06.112 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:06.112 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:06.112 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:06.112 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:06.112 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:06.112 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:06.112 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:06.112 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:06.112 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:06.112 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:06.112 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:06.112 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:06.112 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:06.112 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:06.112 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:06.112 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:06.112 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:06.112 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:06.112 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:06.112 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:06.112 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:06.112 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:06.112 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:06.112 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:06.112 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:06.112 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:06.112 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:06.112 
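
Each "Installing symlink" pair above and below completes the conventional three-name chain for a shared library: the real DSO carries the full version, the SONAME link (.so.23) is what the dynamic linker resolves at run time, and the bare .so is the development name the linker uses for -lrte_<x>. A sketch of inspecting one chain from the list above:

    cd /home/vagrant/spdk_repo/dpdk/build/lib
    ls -l librte_ethdev.so*
    # librte_ethdev.so    -> librte_ethdev.so.23    (link-time name)
    # librte_ethdev.so.23 -> librte_ethdev.so.23.0  (runtime name; the real DSO)
    readelf -d librte_ethdev.so.23.0 | grep SONAME   # expect librte_ethdev.so.23
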
Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:06.112 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:06.112 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:06.112 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:06.112 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:06.112 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:06.112 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:06.112 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:06.112 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:06.112 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:06.112 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:06.112 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:06.112 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:06.112 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:06.112 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:06.112 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:06.112 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:06.112 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:06.112 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:06.112 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:06.112 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:06.112 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:06.112 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:06.112 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:06.112 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:06.112 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:06.112 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:06.112 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:06.112 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:06.112 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:06.112 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:06.112 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:06.112 Installing symlink pointing to librte_power.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:06.112 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:06.112 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:06.112 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:06.112 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:06.112 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:06.112 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:06.112 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:06.112 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:06.112 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:06.112 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:06.112 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:06.112 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:06.112 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:06.112 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:06.112 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:06.112 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:06.112 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:06.112 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:06.112 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:06.112 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:06.112 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:06.112 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:06.112 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:06.112 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:06.112 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:06.112 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:06.112 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:06.112 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:06.112 
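
The './librte_bus_pci.so' -> 'dpdk/pmds-23.0/...' lines a little earlier, and the pmds-23.0 symlinks just below (created by the symlink-drivers-solibs.sh script named further down), come from DPDK's driver relocation: the PMDs (bus_pci, bus_vdev, mempool_ring, net_i40e) are parked in a dedicated plugin directory so the EAL can dlopen them at startup. A hedged sketch; the application name is hypothetical, while -d is the standard EAL option for loading an external driver:

    ls /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/
    # a hypothetical DPDK app loading the i40e PMD explicitly:
    ./my_dpdk_app -d /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so
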
Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:06.112 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:06.112 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:06.112 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:06.112 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:06.112 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:06.112 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:06.112 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:06.112 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:06.112 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:06.112 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:06.112 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:06.112 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:06.112 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:06.112 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:06.112 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:06.112 17:49:22 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:06.112 17:49:22 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:06.112 17:49:22 -- common/autobuild_common.sh@203 -- $ cat 00:03:06.112 17:49:22 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:06.112 00:03:06.112 real 0m45.768s 00:03:06.112 user 4m20.062s 00:03:06.112 sys 0m59.746s 00:03:06.112 17:49:22 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:06.113 ************************************ 00:03:06.113 END TEST build_native_dpdk 00:03:06.113 ************************************ 00:03:06.113 17:49:22 -- common/autotest_common.sh@10 -- $ set +x 00:03:06.372 17:49:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:06.372 17:49:23 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:06.372 17:49:23 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:06.372 17:49:23 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:06.372 17:49:23 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:06.372 17:49:23 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:06.372 17:49:23 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:06.372 17:49:23 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests 
--enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:06.372 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:06.632 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.632 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:06.632 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:06.891 Using 'verbs' RDMA provider 00:03:22.710 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:37.595 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:37.855 Creating mk/config.mk...done. 00:03:37.855 Creating mk/cc.flags.mk...done. 00:03:37.855 Type 'make' to build. 00:03:37.855 17:49:54 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:37.855 17:49:54 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:37.855 17:49:54 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:37.855 17:49:54 -- common/autotest_common.sh@10 -- $ set +x 00:03:37.855 ************************************ 00:03:37.855 START TEST make 00:03:37.855 ************************************ 00:03:37.855 17:49:54 -- common/autotest_common.sh@1114 -- $ make -j10 00:03:38.115 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:38.116 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:38.116 meson setup builddir \ 00:03:38.116 -Dwith-libaio=enabled \ 00:03:38.116 -Dwith-liburing=enabled \ 00:03:38.116 -Dwith-libvfn=disabled \ 00:03:38.116 -Dwith-spdk=false && \ 00:03:38.116 meson compile -C builddir && \ 00:03:38.116 cd -) 00:03:38.376 make[1]: Nothing to be done for 'all'. 00:03:40.911 The Meson build system 00:03:40.911 Version: 1.5.0 00:03:40.911 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:40.911 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:40.911 Build type: native build 00:03:40.911 Project name: xnvme 00:03:40.911 Project version: 0.7.3 00:03:40.911 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:40.911 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:40.911 Host machine cpu family: x86_64 00:03:40.911 Host machine cpu: x86_64 00:03:40.911 Message: host_machine.system: linux 00:03:40.911 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:40.911 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:40.911 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:40.911 Run-time dependency threads found: YES 00:03:40.911 Has header "setupapi.h" : NO 00:03:40.911 Has header "linux/blkzoned.h" : YES 00:03:40.911 Has header "linux/blkzoned.h" : YES (cached) 00:03:40.911 Has header "libaio.h" : YES 00:03:40.911 Library aio found: YES 00:03:40.911 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:40.911 Run-time dependency liburing found: YES 2.2 00:03:40.911 Dependency libvfn skipped: feature with-libvfn disabled 00:03:40.911 Run-time dependency appleframeworks found: NO (tried framework) 00:03:40.911 Run-time dependency appleframeworks found: NO (tried framework) 00:03:40.911 Configuring xnvme_config.h using configuration 00:03:40.911 Configuring xnvme.spec using configuration 00:03:40.911 Run-time dependency bash-completion found: YES 2.11 00:03:40.911 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:40.911 Program cp found: YES (/usr/bin/cp) 00:03:40.911 Has 
header "winsock2.h" : NO 00:03:40.911 Has header "dbghelp.h" : NO 00:03:40.911 Library rpcrt4 found: NO 00:03:40.911 Library rt found: YES 00:03:40.911 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:40.911 Found CMake: /usr/bin/cmake (3.27.7) 00:03:40.911 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:40.911 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:40.911 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:40.911 Build targets in project: 32 00:03:40.911 00:03:40.911 xnvme 0.7.3 00:03:40.911 00:03:40.911 User defined options 00:03:40.911 with-libaio : enabled 00:03:40.911 with-liburing: enabled 00:03:40.911 with-libvfn : disabled 00:03:40.911 with-spdk : false 00:03:40.911 00:03:40.911 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:40.911 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:40.911 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:40.911 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:40.911 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:40.911 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:40.911 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:40.911 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:40.911 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:40.911 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:40.911 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:40.911 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:40.911 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:41.170 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:41.171 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:41.171 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:41.171 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:41.171 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:41.171 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:41.171 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:41.171 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:41.171 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:41.171 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:41.171 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:41.171 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:41.171 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:41.171 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:41.171 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:41.171 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:41.171 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:41.171 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:41.171 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:41.171 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:41.171 [32/203] Compiling C object 
lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:41.171 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:41.171 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:41.171 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:41.171 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:41.171 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:41.171 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:41.171 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:41.171 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:41.171 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:41.171 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:41.171 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:41.171 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:41.435 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:41.436 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:41.436 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:41.436 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:41.436 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:41.436 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:41.436 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:41.436 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:41.436 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:41.436 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:41.436 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:41.436 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:41.436 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:41.436 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:41.436 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:41.436 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:41.436 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:41.436 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:41.436 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:41.436 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:41.436 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:41.436 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:41.436 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:41.436 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:41.709 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:41.709 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:41.709 [71/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:41.709 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:41.709 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:41.709 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:41.709 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:41.709 [76/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:41.709 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:41.709 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:41.709 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:41.709 [80/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:41.709 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:41.709 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:41.709 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:41.709 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:41.709 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:41.709 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:41.709 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:41.972 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:41.972 [89/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:41.972 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:41.972 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:41.972 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:41.972 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:41.972 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:41.972 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:41.972 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:41.972 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:41.972 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:41.972 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:41.972 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:41.972 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:41.972 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:41.972 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:41.972 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:41.972 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:41.972 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:41.972 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:41.972 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:41.972 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:41.972 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:41.972 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:41.972 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:41.972 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:41.972 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:41.972 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:41.972 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:41.972 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:41.972 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:41.972 [119/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:41.972 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:41.972 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:42.232 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:42.232 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:42.232 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:42.232 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:42.232 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:42.232 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:42.232 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:42.232 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:42.232 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:42.232 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:42.232 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:42.232 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:42.232 [134/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:42.232 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:42.232 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:42.232 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:42.232 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:42.232 [139/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:42.232 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:42.232 [141/203] Linking target lib/libxnvme.so 00:03:42.232 [142/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:03:42.490 [143/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:42.490 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:42.490 [145/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:42.490 [146/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:42.490 [147/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:42.490 [148/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:42.490 [149/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:42.490 [150/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:42.490 [151/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:42.490 [152/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:42.490 [153/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:42.490 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:42.490 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:42.490 [156/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:42.490 [157/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:42.490 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:42.490 [159/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:42.748 [160/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:42.748 [161/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:42.748 [162/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:42.748 [163/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:42.748 [164/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:42.748 [165/203] Compiling C object 
examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:42.748 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:42.748 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:42.748 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:42.748 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:42.748 [170/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:42.748 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:42.748 [172/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:43.008 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:43.008 [174/203] Linking static target lib/libxnvme.a 00:03:43.008 [175/203] Linking target tests/xnvme_tests_async_intf 00:03:43.008 [176/203] Linking target tests/xnvme_tests_enum 00:03:43.008 [177/203] Linking target tests/xnvme_tests_buf 00:03:43.008 [178/203] Linking target tests/xnvme_tests_xnvme_file 00:03:43.008 [179/203] Linking target tests/xnvme_tests_cli 00:03:43.008 [180/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:43.008 [181/203] Linking target tests/xnvme_tests_lblk 00:03:43.008 [182/203] Linking target tests/xnvme_tests_znd_state 00:03:43.008 [183/203] Linking target tests/xnvme_tests_scc 00:03:43.008 [184/203] Linking target tests/xnvme_tests_ioworker 00:03:43.008 [185/203] Linking target tests/xnvme_tests_znd_append 00:03:43.008 [186/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:43.008 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:43.008 [188/203] Linking target tests/xnvme_tests_kvs 00:03:43.008 [189/203] Linking target tests/xnvme_tests_map 00:03:43.008 [190/203] Linking target examples/xnvme_dev 00:03:43.008 [191/203] Linking target tools/xnvme 00:03:43.008 [192/203] Linking target tools/xdd 00:03:43.008 [193/203] Linking target tools/zoned 00:03:43.008 [194/203] Linking target tools/kvs 00:03:43.008 [195/203] Linking target tools/lblk 00:03:43.008 [196/203] Linking target tools/xnvme_file 00:03:43.008 [197/203] Linking target examples/xnvme_enum 00:03:43.008 [198/203] Linking target examples/xnvme_io_async 00:03:43.008 [199/203] Linking target examples/xnvme_hello 00:03:43.008 [200/203] Linking target examples/zoned_io_async 00:03:43.008 [201/203] Linking target examples/xnvme_single_sync 00:03:43.008 [202/203] Linking target examples/zoned_io_sync 00:03:43.008 [203/203] Linking target examples/xnvme_single_async 00:03:43.008 INFO: autodetecting backend as ninja 00:03:43.008 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:43.267 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:58.297 CC lib/log/log.o 00:03:58.297 CC lib/log/log_deprecated.o 00:03:58.297 CC lib/log/log_flags.o 00:03:58.297 CC lib/ut_mock/mock.o 00:03:58.297 CC lib/ut/ut.o 00:03:58.297 LIB libspdk_ut_mock.a 00:03:58.297 SO libspdk_ut_mock.so.5.0 00:03:58.297 LIB libspdk_log.a 00:03:58.297 LIB libspdk_ut.a 00:03:58.297 SO libspdk_log.so.6.1 00:03:58.297 SYMLINK libspdk_ut_mock.so 00:03:58.297 SO libspdk_ut.so.1.0 00:03:58.297 SYMLINK libspdk_ut.so 00:03:58.297 SYMLINK libspdk_log.so 00:03:58.297 CC lib/dma/dma.o 00:03:58.297 CXX lib/trace_parser/trace.o 00:03:58.297 CC lib/util/bit_array.o 00:03:58.297 CC lib/util/cpuset.o 00:03:58.297 CC lib/util/base64.o 00:03:58.297 CC lib/util/crc32.o 00:03:58.297 CC lib/util/crc32c.o 00:03:58.297 CC lib/util/crc16.o 00:03:58.297 CC lib/ioat/ioat.o 00:03:58.297 CC 
lib/vfio_user/host/vfio_user_pci.o 00:03:58.297 CC lib/util/crc32_ieee.o 00:03:58.555 CC lib/util/crc64.o 00:03:58.555 CC lib/util/dif.o 00:03:58.555 CC lib/util/fd.o 00:03:58.555 LIB libspdk_dma.a 00:03:58.555 CC lib/util/file.o 00:03:58.555 SO libspdk_dma.so.3.0 00:03:58.555 CC lib/vfio_user/host/vfio_user.o 00:03:58.555 CC lib/util/hexlify.o 00:03:58.555 CC lib/util/iov.o 00:03:58.555 SYMLINK libspdk_dma.so 00:03:58.555 CC lib/util/math.o 00:03:58.555 LIB libspdk_ioat.a 00:03:58.555 CC lib/util/pipe.o 00:03:58.555 SO libspdk_ioat.so.6.0 00:03:58.555 CC lib/util/strerror_tls.o 00:03:58.555 CC lib/util/string.o 00:03:58.555 CC lib/util/uuid.o 00:03:58.555 SYMLINK libspdk_ioat.so 00:03:58.555 CC lib/util/fd_group.o 00:03:58.814 CC lib/util/xor.o 00:03:58.814 CC lib/util/zipf.o 00:03:58.814 LIB libspdk_vfio_user.a 00:03:58.814 SO libspdk_vfio_user.so.4.0 00:03:58.814 SYMLINK libspdk_vfio_user.so 00:03:59.073 LIB libspdk_util.a 00:03:59.073 SO libspdk_util.so.8.0 00:03:59.337 SYMLINK libspdk_util.so 00:03:59.337 LIB libspdk_trace_parser.a 00:03:59.337 SO libspdk_trace_parser.so.4.0 00:03:59.617 CC lib/rdma/common.o 00:03:59.617 CC lib/idxd/idxd.o 00:03:59.617 CC lib/rdma/rdma_verbs.o 00:03:59.617 CC lib/idxd/idxd_kernel.o 00:03:59.617 CC lib/idxd/idxd_user.o 00:03:59.617 CC lib/vmd/vmd.o 00:03:59.617 CC lib/json/json_parse.o 00:03:59.617 CC lib/conf/conf.o 00:03:59.617 CC lib/env_dpdk/env.o 00:03:59.617 SYMLINK libspdk_trace_parser.so 00:03:59.617 CC lib/vmd/led.o 00:03:59.617 CC lib/json/json_util.o 00:03:59.617 CC lib/json/json_write.o 00:03:59.617 CC lib/env_dpdk/memory.o 00:03:59.617 LIB libspdk_conf.a 00:03:59.617 CC lib/env_dpdk/pci.o 00:03:59.617 CC lib/env_dpdk/init.o 00:03:59.617 SO libspdk_conf.so.5.0 00:03:59.617 LIB libspdk_rdma.a 00:03:59.884 SYMLINK libspdk_conf.so 00:03:59.884 CC lib/env_dpdk/threads.o 00:03:59.884 SO libspdk_rdma.so.5.0 00:03:59.884 SYMLINK libspdk_rdma.so 00:03:59.884 CC lib/env_dpdk/pci_ioat.o 00:03:59.884 CC lib/env_dpdk/pci_virtio.o 00:03:59.884 CC lib/env_dpdk/pci_vmd.o 00:03:59.884 LIB libspdk_json.a 00:03:59.884 CC lib/env_dpdk/pci_idxd.o 00:03:59.884 SO libspdk_json.so.5.1 00:03:59.884 CC lib/env_dpdk/pci_event.o 00:03:59.884 CC lib/env_dpdk/sigbus_handler.o 00:03:59.884 CC lib/env_dpdk/pci_dpdk.o 00:04:00.143 SYMLINK libspdk_json.so 00:04:00.143 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:00.143 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:00.143 LIB libspdk_idxd.a 00:04:00.143 SO libspdk_idxd.so.11.0 00:04:00.143 LIB libspdk_vmd.a 00:04:00.143 SYMLINK libspdk_idxd.so 00:04:00.143 CC lib/jsonrpc/jsonrpc_server.o 00:04:00.143 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:00.143 CC lib/jsonrpc/jsonrpc_client.o 00:04:00.143 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:00.143 SO libspdk_vmd.so.5.0 00:04:00.143 SYMLINK libspdk_vmd.so 00:04:00.401 LIB libspdk_jsonrpc.a 00:04:00.401 SO libspdk_jsonrpc.so.5.1 00:04:00.659 SYMLINK libspdk_jsonrpc.so 00:04:00.918 CC lib/rpc/rpc.o 00:04:00.918 LIB libspdk_env_dpdk.a 00:04:00.918 SO libspdk_env_dpdk.so.13.0 00:04:01.176 LIB libspdk_rpc.a 00:04:01.176 SO libspdk_rpc.so.5.0 00:04:01.176 SYMLINK libspdk_rpc.so 00:04:01.176 SYMLINK libspdk_env_dpdk.so 00:04:01.435 CC lib/trace/trace.o 00:04:01.435 CC lib/trace/trace_flags.o 00:04:01.435 CC lib/trace/trace_rpc.o 00:04:01.435 CC lib/notify/notify.o 00:04:01.435 CC lib/notify/notify_rpc.o 00:04:01.435 CC lib/sock/sock.o 00:04:01.435 CC lib/sock/sock_rpc.o 00:04:01.693 LIB libspdk_notify.a 00:04:01.693 SO libspdk_notify.so.5.0 00:04:01.693 LIB libspdk_trace.a 00:04:01.693 SO 
libspdk_trace.so.9.0 00:04:01.693 SYMLINK libspdk_notify.so 00:04:01.693 SYMLINK libspdk_trace.so 00:04:01.952 LIB libspdk_sock.a 00:04:01.952 SO libspdk_sock.so.8.0 00:04:01.952 SYMLINK libspdk_sock.so 00:04:01.952 CC lib/thread/thread.o 00:04:01.952 CC lib/thread/iobuf.o 00:04:02.211 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:02.211 CC lib/nvme/nvme_ctrlr.o 00:04:02.211 CC lib/nvme/nvme_fabric.o 00:04:02.211 CC lib/nvme/nvme_ns_cmd.o 00:04:02.211 CC lib/nvme/nvme_ns.o 00:04:02.211 CC lib/nvme/nvme_pcie_common.o 00:04:02.211 CC lib/nvme/nvme_pcie.o 00:04:02.211 CC lib/nvme/nvme_qpair.o 00:04:02.469 CC lib/nvme/nvme.o 00:04:02.727 CC lib/nvme/nvme_quirks.o 00:04:02.985 CC lib/nvme/nvme_transport.o 00:04:02.985 CC lib/nvme/nvme_discovery.o 00:04:02.985 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:02.985 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:03.243 CC lib/nvme/nvme_tcp.o 00:04:03.243 CC lib/nvme/nvme_opal.o 00:04:03.243 CC lib/nvme/nvme_io_msg.o 00:04:03.502 CC lib/nvme/nvme_poll_group.o 00:04:03.502 CC lib/nvme/nvme_zns.o 00:04:03.502 CC lib/nvme/nvme_cuse.o 00:04:03.502 CC lib/nvme/nvme_vfio_user.o 00:04:03.760 CC lib/nvme/nvme_rdma.o 00:04:03.760 LIB libspdk_thread.a 00:04:03.760 SO libspdk_thread.so.9.0 00:04:03.760 SYMLINK libspdk_thread.so 00:04:03.760 CC lib/blob/blobstore.o 00:04:03.760 CC lib/accel/accel.o 00:04:04.018 CC lib/accel/accel_rpc.o 00:04:04.018 CC lib/accel/accel_sw.o 00:04:04.018 CC lib/blob/request.o 00:04:04.278 CC lib/init/json_config.o 00:04:04.278 CC lib/blob/zeroes.o 00:04:04.278 CC lib/virtio/virtio.o 00:04:04.278 CC lib/blob/blob_bs_dev.o 00:04:04.278 CC lib/virtio/virtio_vhost_user.o 00:04:04.278 CC lib/virtio/virtio_vfio_user.o 00:04:04.537 CC lib/init/subsystem.o 00:04:04.537 CC lib/init/subsystem_rpc.o 00:04:04.537 CC lib/init/rpc.o 00:04:04.537 CC lib/virtio/virtio_pci.o 00:04:04.796 LIB libspdk_init.a 00:04:04.796 SO libspdk_init.so.4.0 00:04:04.796 SYMLINK libspdk_init.so 00:04:04.796 LIB libspdk_virtio.a 00:04:04.796 SO libspdk_virtio.so.6.0 00:04:05.054 LIB libspdk_accel.a 00:04:05.054 SO libspdk_accel.so.14.0 00:04:05.054 LIB libspdk_nvme.a 00:04:05.054 SYMLINK libspdk_virtio.so 00:04:05.054 CC lib/event/app_rpc.o 00:04:05.054 CC lib/event/app.o 00:04:05.054 CC lib/event/reactor.o 00:04:05.054 CC lib/event/log_rpc.o 00:04:05.054 CC lib/event/scheduler_static.o 00:04:05.054 SYMLINK libspdk_accel.so 00:04:05.312 SO libspdk_nvme.so.12.0 00:04:05.312 CC lib/bdev/part.o 00:04:05.312 CC lib/bdev/bdev_zone.o 00:04:05.312 CC lib/bdev/bdev.o 00:04:05.312 CC lib/bdev/bdev_rpc.o 00:04:05.312 CC lib/bdev/scsi_nvme.o 00:04:05.569 SYMLINK libspdk_nvme.so 00:04:05.569 LIB libspdk_event.a 00:04:05.569 SO libspdk_event.so.12.0 00:04:05.569 SYMLINK libspdk_event.so 00:04:06.971 LIB libspdk_blob.a 00:04:07.229 SO libspdk_blob.so.10.1 00:04:07.229 SYMLINK libspdk_blob.so 00:04:07.487 CC lib/lvol/lvol.o 00:04:07.487 CC lib/blobfs/blobfs.o 00:04:07.487 CC lib/blobfs/tree.o 00:04:08.052 LIB libspdk_bdev.a 00:04:08.052 SO libspdk_bdev.so.14.0 00:04:08.310 SYMLINK libspdk_bdev.so 00:04:08.310 LIB libspdk_blobfs.a 00:04:08.310 SO libspdk_blobfs.so.9.0 00:04:08.310 LIB libspdk_lvol.a 00:04:08.310 CC lib/nvmf/ctrlr.o 00:04:08.310 CC lib/nvmf/ctrlr_discovery.o 00:04:08.310 CC lib/nbd/nbd.o 00:04:08.310 CC lib/nvmf/subsystem.o 00:04:08.310 CC lib/nvmf/ctrlr_bdev.o 00:04:08.568 CC lib/scsi/dev.o 00:04:08.568 SYMLINK libspdk_blobfs.so 00:04:08.568 CC lib/ublk/ublk.o 00:04:08.568 CC lib/ftl/ftl_core.o 00:04:08.568 CC lib/ublk/ublk_rpc.o 00:04:08.568 SO libspdk_lvol.so.9.1 00:04:08.568 SYMLINK 
libspdk_lvol.so 00:04:08.568 CC lib/ftl/ftl_init.o 00:04:08.568 CC lib/ftl/ftl_layout.o 00:04:08.568 CC lib/scsi/lun.o 00:04:08.826 CC lib/ftl/ftl_debug.o 00:04:08.826 CC lib/ftl/ftl_io.o 00:04:08.826 CC lib/nbd/nbd_rpc.o 00:04:08.826 CC lib/ftl/ftl_sb.o 00:04:08.826 CC lib/ftl/ftl_l2p.o 00:04:08.826 CC lib/ftl/ftl_l2p_flat.o 00:04:09.084 CC lib/scsi/port.o 00:04:09.084 LIB libspdk_nbd.a 00:04:09.084 SO libspdk_nbd.so.6.0 00:04:09.084 LIB libspdk_ublk.a 00:04:09.084 CC lib/ftl/ftl_nv_cache.o 00:04:09.084 CC lib/ftl/ftl_band.o 00:04:09.084 SYMLINK libspdk_nbd.so 00:04:09.084 CC lib/nvmf/nvmf.o 00:04:09.084 SO libspdk_ublk.so.2.0 00:04:09.084 CC lib/scsi/scsi.o 00:04:09.084 CC lib/ftl/ftl_band_ops.o 00:04:09.084 CC lib/ftl/ftl_writer.o 00:04:09.084 CC lib/nvmf/nvmf_rpc.o 00:04:09.084 SYMLINK libspdk_ublk.so 00:04:09.084 CC lib/scsi/scsi_bdev.o 00:04:09.341 CC lib/scsi/scsi_pr.o 00:04:09.341 CC lib/ftl/ftl_rq.o 00:04:09.599 CC lib/ftl/ftl_reloc.o 00:04:09.599 CC lib/ftl/ftl_l2p_cache.o 00:04:09.599 CC lib/ftl/ftl_p2l.o 00:04:09.599 CC lib/ftl/mngt/ftl_mngt.o 00:04:09.599 CC lib/scsi/scsi_rpc.o 00:04:09.599 CC lib/scsi/task.o 00:04:09.857 CC lib/nvmf/transport.o 00:04:09.857 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:09.857 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:09.857 LIB libspdk_scsi.a 00:04:09.857 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:09.857 SO libspdk_scsi.so.8.0 00:04:10.115 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:10.115 CC lib/nvmf/tcp.o 00:04:10.115 CC lib/nvmf/rdma.o 00:04:10.116 SYMLINK libspdk_scsi.so 00:04:10.116 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:10.116 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:10.116 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:10.116 CC lib/vhost/vhost.o 00:04:10.116 CC lib/iscsi/conn.o 00:04:10.116 CC lib/vhost/vhost_rpc.o 00:04:10.374 CC lib/vhost/vhost_scsi.o 00:04:10.374 CC lib/vhost/vhost_blk.o 00:04:10.374 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:10.374 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:10.374 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:10.374 CC lib/vhost/rte_vhost_user.o 00:04:10.633 CC lib/iscsi/init_grp.o 00:04:10.633 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:10.892 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:10.892 CC lib/ftl/utils/ftl_conf.o 00:04:10.892 CC lib/iscsi/iscsi.o 00:04:10.892 CC lib/iscsi/md5.o 00:04:10.892 CC lib/ftl/utils/ftl_md.o 00:04:10.892 CC lib/ftl/utils/ftl_mempool.o 00:04:11.151 CC lib/iscsi/param.o 00:04:11.151 CC lib/iscsi/portal_grp.o 00:04:11.151 CC lib/iscsi/tgt_node.o 00:04:11.151 CC lib/iscsi/iscsi_subsystem.o 00:04:11.151 CC lib/ftl/utils/ftl_bitmap.o 00:04:11.410 CC lib/iscsi/iscsi_rpc.o 00:04:11.410 CC lib/iscsi/task.o 00:04:11.410 CC lib/ftl/utils/ftl_property.o 00:04:11.410 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:11.410 LIB libspdk_vhost.a 00:04:11.410 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:11.668 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:11.668 SO libspdk_vhost.so.7.1 00:04:11.668 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:11.668 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:11.668 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:11.668 SYMLINK libspdk_vhost.so 00:04:11.668 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:11.668 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:11.669 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:11.669 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:11.669 CC lib/ftl/base/ftl_base_dev.o 00:04:11.669 CC lib/ftl/base/ftl_base_bdev.o 00:04:11.669 CC lib/ftl/ftl_trace.o 00:04:11.928 LIB libspdk_ftl.a 00:04:12.187 SO libspdk_ftl.so.8.0 00:04:12.187 LIB libspdk_iscsi.a 00:04:12.187 LIB libspdk_nvmf.a 00:04:12.446 SO libspdk_iscsi.so.7.0 
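
Throughout this make phase each SPDK component is emitted twice, a LIB (static .a) and an SO (versioned shared object) plus its SYMLINK, because the configure invocation earlier in the log passed --with-shared. Both flavours land under build/lib; a hedged sketch of what the iscsi triplet just above should look like on disk:

    ls -l /home/vagrant/spdk_repo/spdk/build/lib/libspdk_iscsi.*
    # libspdk_iscsi.a                           (static archive, the LIB line)
    # libspdk_iscsi.so -> libspdk_iscsi.so.7.0  (unversioned dev symlink)
    # libspdk_iscsi.so.7.0                      (versioned DSO, the SO line)
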
00:04:12.446 SO libspdk_nvmf.so.17.0 00:04:12.446 SYMLINK libspdk_ftl.so 00:04:12.446 SYMLINK libspdk_iscsi.so 00:04:12.704 SYMLINK libspdk_nvmf.so 00:04:12.963 CC module/env_dpdk/env_dpdk_rpc.o 00:04:12.963 CC module/blob/bdev/blob_bdev.o 00:04:12.963 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:12.963 CC module/accel/ioat/accel_ioat.o 00:04:12.963 CC module/scheduler/gscheduler/gscheduler.o 00:04:12.963 CC module/accel/iaa/accel_iaa.o 00:04:12.963 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:12.963 CC module/sock/posix/posix.o 00:04:12.963 CC module/accel/error/accel_error.o 00:04:12.963 CC module/accel/dsa/accel_dsa.o 00:04:12.963 LIB libspdk_env_dpdk_rpc.a 00:04:12.963 SO libspdk_env_dpdk_rpc.so.5.0 00:04:13.221 LIB libspdk_scheduler_dpdk_governor.a 00:04:13.221 LIB libspdk_scheduler_gscheduler.a 00:04:13.221 SYMLINK libspdk_env_dpdk_rpc.so 00:04:13.221 CC module/accel/dsa/accel_dsa_rpc.o 00:04:13.221 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:13.221 SO libspdk_scheduler_gscheduler.so.3.0 00:04:13.221 CC module/accel/ioat/accel_ioat_rpc.o 00:04:13.221 LIB libspdk_scheduler_dynamic.a 00:04:13.221 CC module/accel/error/accel_error_rpc.o 00:04:13.221 CC module/accel/iaa/accel_iaa_rpc.o 00:04:13.221 SO libspdk_scheduler_dynamic.so.3.0 00:04:13.221 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:13.221 SYMLINK libspdk_scheduler_gscheduler.so 00:04:13.221 LIB libspdk_blob_bdev.a 00:04:13.221 SYMLINK libspdk_scheduler_dynamic.so 00:04:13.221 LIB libspdk_accel_dsa.a 00:04:13.221 SO libspdk_blob_bdev.so.10.1 00:04:13.221 LIB libspdk_accel_ioat.a 00:04:13.221 LIB libspdk_accel_error.a 00:04:13.221 SYMLINK libspdk_blob_bdev.so 00:04:13.221 SO libspdk_accel_dsa.so.4.0 00:04:13.221 LIB libspdk_accel_iaa.a 00:04:13.221 SO libspdk_accel_error.so.1.0 00:04:13.221 SO libspdk_accel_ioat.so.5.0 00:04:13.481 SO libspdk_accel_iaa.so.2.0 00:04:13.481 SYMLINK libspdk_accel_dsa.so 00:04:13.481 SYMLINK libspdk_accel_error.so 00:04:13.481 SYMLINK libspdk_accel_ioat.so 00:04:13.481 SYMLINK libspdk_accel_iaa.so 00:04:13.481 CC module/blobfs/bdev/blobfs_bdev.o 00:04:13.481 CC module/bdev/delay/vbdev_delay.o 00:04:13.481 CC module/bdev/nvme/bdev_nvme.o 00:04:13.481 CC module/bdev/gpt/gpt.o 00:04:13.481 CC module/bdev/malloc/bdev_malloc.o 00:04:13.481 CC module/bdev/error/vbdev_error.o 00:04:13.481 CC module/bdev/null/bdev_null.o 00:04:13.481 CC module/bdev/passthru/vbdev_passthru.o 00:04:13.481 CC module/bdev/lvol/vbdev_lvol.o 00:04:13.740 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:13.740 CC module/bdev/gpt/vbdev_gpt.o 00:04:13.740 LIB libspdk_sock_posix.a 00:04:13.740 CC module/bdev/null/bdev_null_rpc.o 00:04:13.740 SO libspdk_sock_posix.so.5.0 00:04:13.740 CC module/bdev/error/vbdev_error_rpc.o 00:04:13.740 LIB libspdk_blobfs_bdev.a 00:04:13.740 SO libspdk_blobfs_bdev.so.5.0 00:04:13.998 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:13.998 SYMLINK libspdk_sock_posix.so 00:04:13.998 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:13.998 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:13.998 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:13.998 SYMLINK libspdk_blobfs_bdev.so 00:04:13.998 CC module/bdev/nvme/nvme_rpc.o 00:04:13.998 LIB libspdk_bdev_null.a 00:04:13.998 LIB libspdk_bdev_error.a 00:04:13.998 LIB libspdk_bdev_gpt.a 00:04:13.998 SO libspdk_bdev_null.so.5.0 00:04:13.998 SO libspdk_bdev_error.so.5.0 00:04:13.998 SO libspdk_bdev_gpt.so.5.0 00:04:13.998 LIB libspdk_bdev_passthru.a 00:04:13.998 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:13.998 SYMLINK libspdk_bdev_null.so 00:04:13.998 
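
The CC module/... objects above are SPDK's pluggable pieces: CPU schedulers, accel and sock drivers, and the bdev modules (the vbdev_* sources are virtual bdevs that stack on another bdev, such as passthru, delay and lvol, while malloc, null and nvme are leaf bdevs). Once a target such as build/bin/spdk_tgt (linked further down) is running, these modules are driven over JSON-RPC; a hedged sketch using two standard RPCs:

    # Create a 64 MiB, 512 B-block malloc bdev, then stack the passthru
    # vbdev compiled above on top of it (rpc.py ships in scripts/).
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Pt0
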
SYMLINK libspdk_bdev_gpt.so 00:04:13.998 SYMLINK libspdk_bdev_error.so 00:04:13.998 SO libspdk_bdev_passthru.so.5.0 00:04:13.998 LIB libspdk_bdev_delay.a 00:04:13.998 LIB libspdk_bdev_malloc.a 00:04:13.998 SO libspdk_bdev_delay.so.5.0 00:04:13.998 SO libspdk_bdev_malloc.so.5.0 00:04:14.265 SYMLINK libspdk_bdev_passthru.so 00:04:14.265 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:14.265 CC module/bdev/split/vbdev_split.o 00:04:14.265 CC module/bdev/nvme/bdev_mdns_client.o 00:04:14.265 CC module/bdev/raid/bdev_raid.o 00:04:14.265 SYMLINK libspdk_bdev_delay.so 00:04:14.265 SYMLINK libspdk_bdev_malloc.so 00:04:14.265 CC module/bdev/nvme/vbdev_opal.o 00:04:14.265 CC module/bdev/raid/bdev_raid_rpc.o 00:04:14.265 CC module/bdev/xnvme/bdev_xnvme.o 00:04:14.265 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:14.265 LIB libspdk_bdev_lvol.a 00:04:14.265 CC module/bdev/split/vbdev_split_rpc.o 00:04:14.265 SO libspdk_bdev_lvol.so.5.0 00:04:14.532 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:14.532 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:14.532 SYMLINK libspdk_bdev_lvol.so 00:04:14.532 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:14.532 LIB libspdk_bdev_split.a 00:04:14.532 CC module/bdev/aio/bdev_aio.o 00:04:14.532 SO libspdk_bdev_split.so.5.0 00:04:14.532 CC module/bdev/aio/bdev_aio_rpc.o 00:04:14.532 LIB libspdk_bdev_zone_block.a 00:04:14.532 CC module/bdev/ftl/bdev_ftl.o 00:04:14.532 SO libspdk_bdev_zone_block.so.5.0 00:04:14.532 CC module/bdev/raid/bdev_raid_sb.o 00:04:14.532 CC module/bdev/iscsi/bdev_iscsi.o 00:04:14.532 LIB libspdk_bdev_xnvme.a 00:04:14.532 SYMLINK libspdk_bdev_split.so 00:04:14.532 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:14.532 SO libspdk_bdev_xnvme.so.2.0 00:04:14.532 SYMLINK libspdk_bdev_zone_block.so 00:04:14.791 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:14.791 SYMLINK libspdk_bdev_xnvme.so 00:04:14.791 CC module/bdev/raid/raid0.o 00:04:14.791 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:14.791 CC module/bdev/raid/raid1.o 00:04:14.791 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:14.791 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:14.791 LIB libspdk_bdev_ftl.a 00:04:14.791 LIB libspdk_bdev_aio.a 00:04:14.791 SO libspdk_bdev_ftl.so.5.0 00:04:14.791 SO libspdk_bdev_aio.so.5.0 00:04:15.049 SYMLINK libspdk_bdev_aio.so 00:04:15.049 SYMLINK libspdk_bdev_ftl.so 00:04:15.049 CC module/bdev/raid/concat.o 00:04:15.049 LIB libspdk_bdev_iscsi.a 00:04:15.049 SO libspdk_bdev_iscsi.so.5.0 00:04:15.049 SYMLINK libspdk_bdev_iscsi.so 00:04:15.307 LIB libspdk_bdev_raid.a 00:04:15.307 SO libspdk_bdev_raid.so.5.0 00:04:15.307 LIB libspdk_bdev_virtio.a 00:04:15.307 SYMLINK libspdk_bdev_raid.so 00:04:15.307 SO libspdk_bdev_virtio.so.5.0 00:04:15.567 SYMLINK libspdk_bdev_virtio.so 00:04:15.825 LIB libspdk_bdev_nvme.a 00:04:15.825 SO libspdk_bdev_nvme.so.6.0 00:04:16.083 SYMLINK libspdk_bdev_nvme.so 00:04:16.650 CC module/event/subsystems/sock/sock.o 00:04:16.650 CC module/event/subsystems/iobuf/iobuf.o 00:04:16.650 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:16.650 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:16.650 CC module/event/subsystems/scheduler/scheduler.o 00:04:16.650 CC module/event/subsystems/vmd/vmd.o 00:04:16.650 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:16.650 LIB libspdk_event_sock.a 00:04:16.650 SO libspdk_event_sock.so.4.0 00:04:16.650 LIB libspdk_event_scheduler.a 00:04:16.650 LIB libspdk_event_vhost_blk.a 00:04:16.650 LIB libspdk_event_iobuf.a 00:04:16.650 SO libspdk_event_scheduler.so.3.0 00:04:16.650 LIB libspdk_event_vmd.a 
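
Two things are visible in this stretch: the module/bdev/xnvme objects exist only because configure ran with --with-xnvme, and they link against the libxnvme produced by the meson sub-build earlier in this log ("Linking static target lib/libxnvme.a"); the module/event/subsystems objects around this point are the per-subsystem init glue used by SPDK's app framework. A hedged sketch confirming the xnvme linkage, assuming the builddir path from the sub-build above:

    # The sub-build's archive lives under the meson build dir; xnvme_buf_alloc
    # is one of the library's public entry points.
    nm -C /home/vagrant/spdk_repo/spdk/xnvme/builddir/lib/libxnvme.a | grep xnvme_buf_alloc
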
00:04:16.650 SO libspdk_event_vhost_blk.so.2.0 00:04:16.650 SYMLINK libspdk_event_sock.so 00:04:16.650 SO libspdk_event_iobuf.so.2.0 00:04:16.650 SO libspdk_event_vmd.so.5.0 00:04:16.650 SYMLINK libspdk_event_scheduler.so 00:04:16.650 SYMLINK libspdk_event_vhost_blk.so 00:04:16.650 SYMLINK libspdk_event_iobuf.so 00:04:16.909 SYMLINK libspdk_event_vmd.so 00:04:16.909 CC module/event/subsystems/accel/accel.o 00:04:17.168 LIB libspdk_event_accel.a 00:04:17.168 SO libspdk_event_accel.so.5.0 00:04:17.168 SYMLINK libspdk_event_accel.so 00:04:17.735 CC module/event/subsystems/bdev/bdev.o 00:04:17.735 LIB libspdk_event_bdev.a 00:04:17.735 SO libspdk_event_bdev.so.5.0 00:04:17.994 SYMLINK libspdk_event_bdev.so 00:04:17.994 CC module/event/subsystems/scsi/scsi.o 00:04:18.253 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:18.253 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:18.253 CC module/event/subsystems/nbd/nbd.o 00:04:18.253 CC module/event/subsystems/ublk/ublk.o 00:04:18.253 LIB libspdk_event_nbd.a 00:04:18.253 LIB libspdk_event_ublk.a 00:04:18.253 LIB libspdk_event_scsi.a 00:04:18.253 SO libspdk_event_nbd.so.5.0 00:04:18.253 SO libspdk_event_ublk.so.2.0 00:04:18.253 SO libspdk_event_scsi.so.5.0 00:04:18.253 SYMLINK libspdk_event_nbd.so 00:04:18.253 LIB libspdk_event_nvmf.a 00:04:18.253 SYMLINK libspdk_event_ublk.so 00:04:18.253 SO libspdk_event_nvmf.so.5.0 00:04:18.511 SYMLINK libspdk_event_scsi.so 00:04:18.511 SYMLINK libspdk_event_nvmf.so 00:04:18.770 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:18.770 CC module/event/subsystems/iscsi/iscsi.o 00:04:18.770 LIB libspdk_event_vhost_scsi.a 00:04:18.770 LIB libspdk_event_iscsi.a 00:04:18.770 SO libspdk_event_vhost_scsi.so.2.0 00:04:18.770 SO libspdk_event_iscsi.so.5.0 00:04:19.028 SYMLINK libspdk_event_vhost_scsi.so 00:04:19.028 SYMLINK libspdk_event_iscsi.so 00:04:19.028 SO libspdk.so.5.0 00:04:19.028 SYMLINK libspdk.so 00:04:19.286 CXX app/trace/trace.o 00:04:19.286 TEST_HEADER include/spdk/accel.h 00:04:19.286 TEST_HEADER include/spdk/accel_module.h 00:04:19.286 TEST_HEADER include/spdk/assert.h 00:04:19.286 TEST_HEADER include/spdk/barrier.h 00:04:19.286 TEST_HEADER include/spdk/base64.h 00:04:19.286 TEST_HEADER include/spdk/bdev.h 00:04:19.286 TEST_HEADER include/spdk/bdev_module.h 00:04:19.286 TEST_HEADER include/spdk/bdev_zone.h 00:04:19.286 TEST_HEADER include/spdk/bit_array.h 00:04:19.286 TEST_HEADER include/spdk/bit_pool.h 00:04:19.286 TEST_HEADER include/spdk/blob_bdev.h 00:04:19.287 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:19.287 TEST_HEADER include/spdk/blobfs.h 00:04:19.287 TEST_HEADER include/spdk/blob.h 00:04:19.287 TEST_HEADER include/spdk/conf.h 00:04:19.287 TEST_HEADER include/spdk/config.h 00:04:19.287 TEST_HEADER include/spdk/cpuset.h 00:04:19.287 TEST_HEADER include/spdk/crc16.h 00:04:19.287 TEST_HEADER include/spdk/crc32.h 00:04:19.287 TEST_HEADER include/spdk/crc64.h 00:04:19.287 TEST_HEADER include/spdk/dif.h 00:04:19.287 TEST_HEADER include/spdk/dma.h 00:04:19.287 TEST_HEADER include/spdk/endian.h 00:04:19.287 CC examples/accel/perf/accel_perf.o 00:04:19.287 TEST_HEADER include/spdk/env_dpdk.h 00:04:19.287 TEST_HEADER include/spdk/env.h 00:04:19.287 TEST_HEADER include/spdk/event.h 00:04:19.287 TEST_HEADER include/spdk/fd_group.h 00:04:19.287 TEST_HEADER include/spdk/fd.h 00:04:19.287 TEST_HEADER include/spdk/file.h 00:04:19.287 TEST_HEADER include/spdk/ftl.h 00:04:19.287 TEST_HEADER include/spdk/gpt_spec.h 00:04:19.287 TEST_HEADER include/spdk/hexlify.h 00:04:19.287 TEST_HEADER 
include/spdk/histogram_data.h 00:04:19.287 TEST_HEADER include/spdk/idxd.h 00:04:19.287 TEST_HEADER include/spdk/idxd_spec.h 00:04:19.287 CC test/blobfs/mkfs/mkfs.o 00:04:19.287 CC examples/bdev/hello_world/hello_bdev.o 00:04:19.287 TEST_HEADER include/spdk/init.h 00:04:19.287 CC test/accel/dif/dif.o 00:04:19.287 CC test/bdev/bdevio/bdevio.o 00:04:19.287 TEST_HEADER include/spdk/ioat.h 00:04:19.287 TEST_HEADER include/spdk/ioat_spec.h 00:04:19.545 TEST_HEADER include/spdk/iscsi_spec.h 00:04:19.546 TEST_HEADER include/spdk/json.h 00:04:19.546 TEST_HEADER include/spdk/jsonrpc.h 00:04:19.546 CC test/app/bdev_svc/bdev_svc.o 00:04:19.546 TEST_HEADER include/spdk/likely.h 00:04:19.546 CC test/env/mem_callbacks/mem_callbacks.o 00:04:19.546 TEST_HEADER include/spdk/log.h 00:04:19.546 TEST_HEADER include/spdk/lvol.h 00:04:19.546 TEST_HEADER include/spdk/memory.h 00:04:19.546 TEST_HEADER include/spdk/mmio.h 00:04:19.546 CC test/dma/test_dma/test_dma.o 00:04:19.546 TEST_HEADER include/spdk/nbd.h 00:04:19.546 TEST_HEADER include/spdk/notify.h 00:04:19.546 TEST_HEADER include/spdk/nvme.h 00:04:19.546 TEST_HEADER include/spdk/nvme_intel.h 00:04:19.546 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:19.546 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:19.546 TEST_HEADER include/spdk/nvme_spec.h 00:04:19.546 TEST_HEADER include/spdk/nvme_zns.h 00:04:19.546 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:19.546 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:19.546 TEST_HEADER include/spdk/nvmf.h 00:04:19.546 TEST_HEADER include/spdk/nvmf_spec.h 00:04:19.546 TEST_HEADER include/spdk/nvmf_transport.h 00:04:19.546 TEST_HEADER include/spdk/opal.h 00:04:19.546 TEST_HEADER include/spdk/opal_spec.h 00:04:19.546 TEST_HEADER include/spdk/pci_ids.h 00:04:19.546 TEST_HEADER include/spdk/pipe.h 00:04:19.546 TEST_HEADER include/spdk/queue.h 00:04:19.546 TEST_HEADER include/spdk/reduce.h 00:04:19.546 TEST_HEADER include/spdk/rpc.h 00:04:19.546 TEST_HEADER include/spdk/scheduler.h 00:04:19.546 TEST_HEADER include/spdk/scsi.h 00:04:19.546 TEST_HEADER include/spdk/scsi_spec.h 00:04:19.546 TEST_HEADER include/spdk/sock.h 00:04:19.546 TEST_HEADER include/spdk/stdinc.h 00:04:19.546 TEST_HEADER include/spdk/string.h 00:04:19.546 TEST_HEADER include/spdk/thread.h 00:04:19.546 TEST_HEADER include/spdk/trace.h 00:04:19.546 TEST_HEADER include/spdk/trace_parser.h 00:04:19.546 TEST_HEADER include/spdk/tree.h 00:04:19.546 TEST_HEADER include/spdk/ublk.h 00:04:19.546 TEST_HEADER include/spdk/util.h 00:04:19.546 TEST_HEADER include/spdk/uuid.h 00:04:19.546 TEST_HEADER include/spdk/version.h 00:04:19.546 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:19.546 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:19.546 TEST_HEADER include/spdk/vhost.h 00:04:19.546 TEST_HEADER include/spdk/vmd.h 00:04:19.546 LINK mkfs 00:04:19.546 TEST_HEADER include/spdk/xor.h 00:04:19.546 TEST_HEADER include/spdk/zipf.h 00:04:19.546 CXX test/cpp_headers/accel.o 00:04:19.546 LINK bdev_svc 00:04:19.546 LINK mem_callbacks 00:04:19.804 LINK spdk_trace 00:04:19.804 LINK hello_bdev 00:04:19.804 CXX test/cpp_headers/accel_module.o 00:04:19.804 CXX test/cpp_headers/assert.o 00:04:19.804 LINK bdevio 00:04:19.805 CC test/env/vtophys/vtophys.o 00:04:19.805 LINK dif 00:04:19.805 LINK test_dma 00:04:19.805 LINK accel_perf 00:04:19.805 CC app/trace_record/trace_record.o 00:04:19.805 CXX test/cpp_headers/barrier.o 00:04:19.805 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:20.063 LINK vtophys 00:04:20.063 CC examples/bdev/bdevperf/bdevperf.o 00:04:20.063 CC app/nvmf_tgt/nvmf_main.o 
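The TEST_HEADER list and the CXX test/cpp_headers/*.o compiles that follow are SPDK's header hygiene check: each public spdk/*.h is compiled on its own, as C++, to prove it is self-contained and C++-safe. A minimal equivalent of what those generated translation units do (the real TUs come from the build system; the loop, flags, and temp paths here are illustrative):

for hdr in include/spdk/*.h; do
    name=$(basename "$hdr" .h)
    # One translation unit per header: if the header is missing an
    # include, -fsyntax-only fails for that unit alone.
    printf '#include <spdk/%s.h>\nint main(void) { return 0; }\n' "$name" \
        > "/tmp/check_${name}.cpp"
    g++ -Iinclude -fsyntax-only "/tmp/check_${name}.cpp" \
        || echo "not self-contained: $hdr"
done
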
00:04:20.063 CXX test/cpp_headers/base64.o 00:04:20.063 CC app/iscsi_tgt/iscsi_tgt.o 00:04:20.063 CC app/spdk_tgt/spdk_tgt.o 00:04:20.063 CC test/app/histogram_perf/histogram_perf.o 00:04:20.063 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:20.063 LINK spdk_trace_record 00:04:20.063 CC examples/blob/hello_world/hello_blob.o 00:04:20.063 CXX test/cpp_headers/bdev.o 00:04:20.063 LINK nvmf_tgt 00:04:20.322 LINK histogram_perf 00:04:20.322 LINK spdk_tgt 00:04:20.322 LINK env_dpdk_post_init 00:04:20.322 LINK iscsi_tgt 00:04:20.322 LINK nvme_fuzz 00:04:20.322 CXX test/cpp_headers/bdev_module.o 00:04:20.322 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:20.322 LINK hello_blob 00:04:20.322 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:20.581 CC test/env/memory/memory_ut.o 00:04:20.581 CC examples/ioat/perf/perf.o 00:04:20.581 CXX test/cpp_headers/bdev_zone.o 00:04:20.581 CC examples/ioat/verify/verify.o 00:04:20.581 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:20.581 CC app/spdk_lspci/spdk_lspci.o 00:04:20.581 CC test/event/event_perf/event_perf.o 00:04:20.581 CC examples/blob/cli/blobcli.o 00:04:20.581 CXX test/cpp_headers/bit_array.o 00:04:20.581 LINK spdk_lspci 00:04:20.841 LINK ioat_perf 00:04:20.841 LINK verify 00:04:20.841 LINK bdevperf 00:04:20.841 LINK event_perf 00:04:20.841 CXX test/cpp_headers/bit_pool.o 00:04:20.841 CXX test/cpp_headers/blob_bdev.o 00:04:20.841 CC app/spdk_nvme_perf/perf.o 00:04:20.841 CC app/spdk_nvme_identify/identify.o 00:04:21.100 CC test/event/reactor/reactor.o 00:04:21.100 LINK vhost_fuzz 00:04:21.100 LINK memory_ut 00:04:21.100 CC app/spdk_nvme_discover/discovery_aer.o 00:04:21.100 CC app/spdk_top/spdk_top.o 00:04:21.100 CXX test/cpp_headers/blobfs_bdev.o 00:04:21.100 LINK reactor 00:04:21.100 LINK blobcli 00:04:21.359 LINK spdk_nvme_discover 00:04:21.359 CXX test/cpp_headers/blobfs.o 00:04:21.359 CC test/env/pci/pci_ut.o 00:04:21.359 CC test/event/reactor_perf/reactor_perf.o 00:04:21.359 CC test/lvol/esnap/esnap.o 00:04:21.359 CXX test/cpp_headers/blob.o 00:04:21.359 CXX test/cpp_headers/conf.o 00:04:21.359 LINK reactor_perf 00:04:21.619 CC examples/nvme/hello_world/hello_world.o 00:04:21.619 CXX test/cpp_headers/config.o 00:04:21.619 CXX test/cpp_headers/cpuset.o 00:04:21.619 CC test/event/app_repeat/app_repeat.o 00:04:21.619 CC test/event/scheduler/scheduler.o 00:04:21.619 LINK pci_ut 00:04:21.878 LINK hello_world 00:04:21.878 CXX test/cpp_headers/crc16.o 00:04:21.878 LINK spdk_nvme_perf 00:04:21.878 LINK app_repeat 00:04:21.878 LINK spdk_nvme_identify 00:04:21.878 LINK scheduler 00:04:21.878 CXX test/cpp_headers/crc32.o 00:04:22.137 CC test/app/jsoncat/jsoncat.o 00:04:22.137 LINK spdk_top 00:04:22.137 CC test/app/stub/stub.o 00:04:22.137 CC examples/nvme/reconnect/reconnect.o 00:04:22.137 CC test/nvme/aer/aer.o 00:04:22.137 CXX test/cpp_headers/crc64.o 00:04:22.137 CC examples/sock/hello_world/hello_sock.o 00:04:22.137 LINK jsoncat 00:04:22.137 LINK iscsi_fuzz 00:04:22.137 LINK stub 00:04:22.137 CC examples/vmd/lsvmd/lsvmd.o 00:04:22.396 CXX test/cpp_headers/dif.o 00:04:22.396 CC app/vhost/vhost.o 00:04:22.396 LINK lsvmd 00:04:22.396 CC examples/vmd/led/led.o 00:04:22.396 LINK aer 00:04:22.396 CXX test/cpp_headers/dma.o 00:04:22.396 LINK hello_sock 00:04:22.396 LINK reconnect 00:04:22.396 CC app/spdk_dd/spdk_dd.o 00:04:22.396 LINK vhost 00:04:22.655 LINK led 00:04:22.655 CXX test/cpp_headers/endian.o 00:04:22.655 CC test/rpc_client/rpc_client_test.o 00:04:22.655 CC app/fio/nvme/fio_plugin.o 00:04:22.655 CC test/nvme/reset/reset.o 
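Among the objects built in this stretch is app/fio/nvme/fio_plugin.o, the fio ioengine that lets stock fio drive NVMe devices through SPDK's userspace driver. A hedged usage sketch follows; the plugin path, engine name, job parameters, and PCIe address are illustrative and version-dependent, not taken from this log:

# fio loads the SPDK engine via LD_PRELOAD; the filename string selects
# the controller by transport and (dot-separated) PCI address.
LD_PRELOAD=./build/fio/spdk_nvme fio \
    --name=seqread --ioengine=spdk --thread=1 \
    --filename='trtype=PCIe traddr=0000.00.06.0 ns=1' \
    --rw=read --bs=4k --iodepth=32 --time_based=1 --runtime=10
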
00:04:22.655 CC test/nvme/sgl/sgl.o 00:04:22.655 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:22.655 CXX test/cpp_headers/env_dpdk.o 00:04:22.914 LINK rpc_client_test 00:04:22.914 CC test/thread/poller_perf/poller_perf.o 00:04:22.914 LINK spdk_dd 00:04:22.915 CC examples/nvmf/nvmf/nvmf.o 00:04:22.915 LINK reset 00:04:22.915 CXX test/cpp_headers/env.o 00:04:22.915 LINK poller_perf 00:04:22.915 LINK sgl 00:04:22.915 CXX test/cpp_headers/event.o 00:04:23.194 CC test/nvme/e2edp/nvme_dp.o 00:04:23.194 CXX test/cpp_headers/fd_group.o 00:04:23.194 CC test/nvme/overhead/overhead.o 00:04:23.194 CC test/nvme/err_injection/err_injection.o 00:04:23.194 CC app/fio/bdev/fio_plugin.o 00:04:23.194 CC test/nvme/startup/startup.o 00:04:23.194 LINK nvmf 00:04:23.194 LINK spdk_nvme 00:04:23.194 LINK nvme_manage 00:04:23.194 CXX test/cpp_headers/fd.o 00:04:23.194 CXX test/cpp_headers/file.o 00:04:23.466 LINK err_injection 00:04:23.466 LINK startup 00:04:23.466 CXX test/cpp_headers/ftl.o 00:04:23.466 LINK nvme_dp 00:04:23.466 LINK overhead 00:04:23.466 CC examples/nvme/arbitration/arbitration.o 00:04:23.466 CC test/nvme/reserve/reserve.o 00:04:23.466 CXX test/cpp_headers/gpt_spec.o 00:04:23.466 CC test/nvme/simple_copy/simple_copy.o 00:04:23.466 CC examples/util/zipf/zipf.o 00:04:23.725 CC test/nvme/connect_stress/connect_stress.o 00:04:23.725 CC test/nvme/boot_partition/boot_partition.o 00:04:23.725 CXX test/cpp_headers/hexlify.o 00:04:23.725 LINK spdk_bdev 00:04:23.725 CC test/nvme/compliance/nvme_compliance.o 00:04:23.725 LINK zipf 00:04:23.725 LINK reserve 00:04:23.725 LINK simple_copy 00:04:23.725 LINK boot_partition 00:04:23.725 LINK connect_stress 00:04:23.725 LINK arbitration 00:04:23.725 CXX test/cpp_headers/histogram_data.o 00:04:23.725 CXX test/cpp_headers/idxd.o 00:04:23.984 CXX test/cpp_headers/idxd_spec.o 00:04:23.984 CC examples/thread/thread/thread_ex.o 00:04:23.984 CC test/nvme/fused_ordering/fused_ordering.o 00:04:23.984 CXX test/cpp_headers/init.o 00:04:23.984 CXX test/cpp_headers/ioat.o 00:04:23.984 CC examples/nvme/hotplug/hotplug.o 00:04:23.984 LINK nvme_compliance 00:04:23.984 CXX test/cpp_headers/ioat_spec.o 00:04:23.984 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:23.984 CC examples/idxd/perf/perf.o 00:04:23.984 CXX test/cpp_headers/iscsi_spec.o 00:04:24.243 LINK fused_ordering 00:04:24.243 LINK thread 00:04:24.243 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:24.243 CC examples/nvme/abort/abort.o 00:04:24.243 LINK interrupt_tgt 00:04:24.243 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:24.243 LINK hotplug 00:04:24.243 CXX test/cpp_headers/json.o 00:04:24.243 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:24.243 LINK cmb_copy 00:04:24.502 CC test/nvme/fdp/fdp.o 00:04:24.502 CXX test/cpp_headers/jsonrpc.o 00:04:24.502 LINK idxd_perf 00:04:24.502 LINK pmr_persistence 00:04:24.502 CXX test/cpp_headers/likely.o 00:04:24.502 CXX test/cpp_headers/log.o 00:04:24.502 CC test/nvme/cuse/cuse.o 00:04:24.502 LINK doorbell_aers 00:04:24.502 CXX test/cpp_headers/lvol.o 00:04:24.502 CXX test/cpp_headers/memory.o 00:04:24.502 CXX test/cpp_headers/mmio.o 00:04:24.502 CXX test/cpp_headers/nbd.o 00:04:24.502 CXX test/cpp_headers/notify.o 00:04:24.502 LINK abort 00:04:24.502 CXX test/cpp_headers/nvme.o 00:04:24.761 CXX test/cpp_headers/nvme_intel.o 00:04:24.761 CXX test/cpp_headers/nvme_ocssd.o 00:04:24.761 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:24.761 LINK fdp 00:04:24.761 CXX test/cpp_headers/nvme_spec.o 00:04:24.761 CXX test/cpp_headers/nvme_zns.o 00:04:24.761 CXX 
test/cpp_headers/nvmf_cmd.o 00:04:24.761 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:24.761 CXX test/cpp_headers/nvmf.o 00:04:24.761 CXX test/cpp_headers/nvmf_spec.o 00:04:24.761 CXX test/cpp_headers/nvmf_transport.o 00:04:24.761 CXX test/cpp_headers/opal.o 00:04:24.761 CXX test/cpp_headers/opal_spec.o 00:04:24.761 CXX test/cpp_headers/pci_ids.o 00:04:25.019 CXX test/cpp_headers/pipe.o 00:04:25.019 CXX test/cpp_headers/queue.o 00:04:25.019 CXX test/cpp_headers/reduce.o 00:04:25.019 CXX test/cpp_headers/rpc.o 00:04:25.019 CXX test/cpp_headers/scheduler.o 00:04:25.019 CXX test/cpp_headers/scsi.o 00:04:25.019 CXX test/cpp_headers/scsi_spec.o 00:04:25.019 CXX test/cpp_headers/sock.o 00:04:25.019 CXX test/cpp_headers/stdinc.o 00:04:25.019 CXX test/cpp_headers/string.o 00:04:25.019 CXX test/cpp_headers/thread.o 00:04:25.019 CXX test/cpp_headers/trace.o 00:04:25.279 CXX test/cpp_headers/trace_parser.o 00:04:25.279 CXX test/cpp_headers/tree.o 00:04:25.279 CXX test/cpp_headers/ublk.o 00:04:25.279 CXX test/cpp_headers/util.o 00:04:25.279 CXX test/cpp_headers/uuid.o 00:04:25.279 CXX test/cpp_headers/version.o 00:04:25.279 CXX test/cpp_headers/vfio_user_pci.o 00:04:25.279 CXX test/cpp_headers/vfio_user_spec.o 00:04:25.279 CXX test/cpp_headers/vhost.o 00:04:25.279 CXX test/cpp_headers/vmd.o 00:04:25.279 CXX test/cpp_headers/xor.o 00:04:25.279 CXX test/cpp_headers/zipf.o 00:04:25.279 LINK cuse 00:04:26.657 LINK esnap 00:04:26.657 00:04:26.657 real 0m48.814s 00:04:26.657 user 4m17.873s 00:04:26.657 sys 1m7.964s 00:04:26.657 17:50:43 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:26.657 17:50:43 -- common/autotest_common.sh@10 -- $ set +x 00:04:26.657 ************************************ 00:04:26.657 END TEST make 00:04:26.657 ************************************ 00:04:26.916 17:50:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:26.916 17:50:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:26.916 17:50:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:26.916 17:50:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:26.916 17:50:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:26.916 17:50:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:26.916 17:50:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:26.916 17:50:43 -- scripts/common.sh@335 -- # IFS=.-: 00:04:26.916 17:50:43 -- scripts/common.sh@335 -- # read -ra ver1 00:04:26.916 17:50:43 -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.916 17:50:43 -- scripts/common.sh@336 -- # read -ra ver2 00:04:26.916 17:50:43 -- scripts/common.sh@337 -- # local 'op=<' 00:04:26.916 17:50:43 -- scripts/common.sh@339 -- # ver1_l=2 00:04:26.916 17:50:43 -- scripts/common.sh@340 -- # ver2_l=1 00:04:26.916 17:50:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:26.916 17:50:43 -- scripts/common.sh@343 -- # case "$op" in 00:04:26.916 17:50:43 -- scripts/common.sh@344 -- # : 1 00:04:26.916 17:50:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:26.916 17:50:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.916 17:50:43 -- scripts/common.sh@364 -- # decimal 1 00:04:26.916 17:50:43 -- scripts/common.sh@352 -- # local d=1 00:04:26.916 17:50:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.916 17:50:43 -- scripts/common.sh@354 -- # echo 1 00:04:26.916 17:50:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:26.916 17:50:43 -- scripts/common.sh@365 -- # decimal 2 00:04:26.916 17:50:43 -- scripts/common.sh@352 -- # local d=2 00:04:26.916 17:50:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.916 17:50:43 -- scripts/common.sh@354 -- # echo 2 00:04:26.916 17:50:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:26.916 17:50:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:26.916 17:50:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:26.916 17:50:43 -- scripts/common.sh@367 -- # return 0 00:04:26.916 17:50:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.916 17:50:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:26.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.916 --rc genhtml_branch_coverage=1 00:04:26.916 --rc genhtml_function_coverage=1 00:04:26.916 --rc genhtml_legend=1 00:04:26.916 --rc geninfo_all_blocks=1 00:04:26.916 --rc geninfo_unexecuted_blocks=1 00:04:26.916 00:04:26.916 ' 00:04:26.916 17:50:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:26.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.916 --rc genhtml_branch_coverage=1 00:04:26.916 --rc genhtml_function_coverage=1 00:04:26.916 --rc genhtml_legend=1 00:04:26.916 --rc geninfo_all_blocks=1 00:04:26.916 --rc geninfo_unexecuted_blocks=1 00:04:26.916 00:04:26.916 ' 00:04:26.916 17:50:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:26.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.916 --rc genhtml_branch_coverage=1 00:04:26.916 --rc genhtml_function_coverage=1 00:04:26.916 --rc genhtml_legend=1 00:04:26.916 --rc geninfo_all_blocks=1 00:04:26.916 --rc geninfo_unexecuted_blocks=1 00:04:26.916 00:04:26.916 ' 00:04:26.916 17:50:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:26.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.916 --rc genhtml_branch_coverage=1 00:04:26.916 --rc genhtml_function_coverage=1 00:04:26.916 --rc genhtml_legend=1 00:04:26.916 --rc geninfo_all_blocks=1 00:04:26.916 --rc geninfo_unexecuted_blocks=1 00:04:26.916 00:04:26.916 ' 00:04:26.916 17:50:43 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:26.916 17:50:43 -- nvmf/common.sh@7 -- # uname -s 00:04:26.916 17:50:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:26.916 17:50:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:26.916 17:50:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:26.916 17:50:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:26.916 17:50:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:26.916 17:50:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:26.916 17:50:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:26.916 17:50:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:26.916 17:50:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:26.916 17:50:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:26.916 17:50:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94f28f6d-9fc6-42c2-a7f8-6374e828f088 00:04:26.916 
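The scripts/common.sh trace just above shows how autotest decides that the installed lcov (1.15) predates 2.x before exporting LCOV_OPTS: both version strings are split on '.', '-', and ':' and compared numerically, element by element. A condensed re-implementation of that comparison, simplified to the strict less-than case (missing components default to 0; the non-numeric guard of the real decimal helper is omitted):

# Condensed sketch of the cmp_versions '<' path traced above.
lt() {
    local IFS=.-: v max ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal is not strictly less-than
}
lt 1.15 2 && echo "lcov < 2: use the 1.x --rc option set"
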
17:50:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=94f28f6d-9fc6-42c2-a7f8-6374e828f088 00:04:26.916 17:50:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:26.917 17:50:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:26.917 17:50:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:26.917 17:50:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:27.176 17:50:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:27.176 17:50:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:27.176 17:50:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:27.176 17:50:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.176 17:50:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.176 17:50:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.176 17:50:43 -- paths/export.sh@5 -- # export PATH 00:04:27.176 17:50:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.176 17:50:43 -- nvmf/common.sh@46 -- # : 0 00:04:27.176 17:50:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:27.176 17:50:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:27.176 17:50:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:27.176 17:50:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:27.176 17:50:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:27.176 17:50:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:27.176 17:50:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:27.176 17:50:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:27.176 17:50:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:27.176 17:50:43 -- spdk/autotest.sh@32 -- # uname -s 00:04:27.176 17:50:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:27.176 17:50:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:27.176 17:50:43 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:27.176 17:50:43 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:27.176 17:50:43 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:27.176 17:50:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:27.176 17:50:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:27.176 17:50:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:27.176 17:50:43 -- spdk/autotest.sh@47 
-- # /usr/sbin/udevadm monitor --property 00:04:27.176 17:50:43 -- spdk/autotest.sh@48 -- # udevadm_pid=60501 00:04:27.176 17:50:43 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:27.176 17:50:43 -- spdk/autotest.sh@54 -- # echo 60521 00:04:27.176 17:50:43 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:27.176 17:50:43 -- spdk/autotest.sh@56 -- # echo 60526 00:04:27.176 17:50:43 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:27.176 17:50:43 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:27.176 17:50:43 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:27.176 17:50:43 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:27.176 17:50:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:27.176 17:50:43 -- common/autotest_common.sh@10 -- # set +x 00:04:27.176 17:50:43 -- spdk/autotest.sh@70 -- # create_test_list 00:04:27.176 17:50:43 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:27.176 17:50:43 -- common/autotest_common.sh@10 -- # set +x 00:04:27.176 17:50:43 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:27.176 17:50:43 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:27.176 17:50:43 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:27.176 17:50:43 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:27.176 17:50:43 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:27.176 17:50:43 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:27.176 17:50:43 -- common/autotest_common.sh@1450 -- # uname 00:04:27.176 17:50:43 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:27.176 17:50:43 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:27.176 17:50:43 -- common/autotest_common.sh@1470 -- # uname 00:04:27.176 17:50:43 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:27.176 17:50:43 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:27.176 17:50:43 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:27.176 lcov: LCOV version 1.15 00:04:27.176 17:50:44 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:35.297 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:35.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:35.297 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:35.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:35.297 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:35.297 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:57.225 17:51:10 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:57.225 17:51:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:57.225 17:51:10 -- common/autotest_common.sh@10 -- # set +x 00:04:57.225 17:51:10 -- spdk/autotest.sh@89 -- # rm -f 00:04:57.225 17:51:10 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:57.225 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:57.225 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:04:57.225 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:04:57.225 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:57.225 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:57.225 17:51:12 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:57.225 17:51:12 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:57.225 17:51:12 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:57.225 17:51:12 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:57.225 17:51:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:57.225 17:51:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:57.225 17:51:12 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:57.225 17:51:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:57.225 17:51:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:57.225 17:51:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:57.225 17:51:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:57.225 17:51:12 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:57.225 17:51:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:57.225 17:51:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:57.225 17:51:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:57.225 17:51:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:04:57.225 17:51:12 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:04:57.225 17:51:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:57.225 17:51:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:57.225 17:51:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:57.225 17:51:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n2 00:04:57.225 17:51:12 -- common/autotest_common.sh@1657 -- # local device=nvme2n2 00:04:57.225 17:51:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:57.225 17:51:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:57.225 17:51:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:57.225 17:51:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n3 00:04:57.225 17:51:12 -- common/autotest_common.sh@1657 -- # local device=nvme2n3 00:04:57.225 17:51:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:57.225 17:51:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:57.225 17:51:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:57.225 17:51:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:04:57.225 17:51:12 -- 
common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:04:57.225 17:51:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:57.225 17:51:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:57.225 17:51:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:57.225 17:51:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:04:57.225 17:51:12 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:04:57.225 17:51:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:57.225 17:51:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:57.225 17:51:12 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:57.225 17:51:12 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme2n2 /dev/nvme2n3 /dev/nvme3n1 00:04:57.225 17:51:12 -- spdk/autotest.sh@108 -- # grep -v p 00:04:57.225 17:51:12 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:57.225 17:51:12 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:57.225 17:51:12 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:57.225 17:51:12 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:57.225 17:51:12 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:57.225 No valid GPT data, bailing 00:04:57.225 17:51:12 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:57.225 17:51:12 -- scripts/common.sh@393 -- # pt= 00:04:57.225 17:51:12 -- scripts/common.sh@394 -- # return 1 00:04:57.225 17:51:12 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:57.225 1+0 records in 00:04:57.225 1+0 records out 00:04:57.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166742 s, 62.9 MB/s 00:04:57.225 17:51:12 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:57.225 17:51:12 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:57.225 17:51:12 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:04:57.225 17:51:12 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:57.225 17:51:12 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:57.225 No valid GPT data, bailing 00:04:57.225 17:51:12 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:57.225 17:51:12 -- scripts/common.sh@393 -- # pt= 00:04:57.225 17:51:12 -- scripts/common.sh@394 -- # return 1 00:04:57.225 17:51:12 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:57.225 1+0 records in 00:04:57.225 1+0 records out 00:04:57.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00430893 s, 243 MB/s 00:04:57.225 17:51:12 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:57.225 17:51:12 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:57.225 17:51:12 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n1 00:04:57.225 17:51:12 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:04:57.225 17:51:12 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:57.225 No valid GPT data, bailing 00:04:57.225 17:51:12 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:57.225 17:51:12 -- scripts/common.sh@393 -- # pt= 00:04:57.226 17:51:12 -- scripts/common.sh@394 -- # return 1 00:04:57.226 17:51:12 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:57.226 1+0 
records in 00:04:57.226 1+0 records out 00:04:57.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513162 s, 204 MB/s 00:04:57.226 17:51:12 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:57.226 17:51:12 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:57.226 17:51:12 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n2 00:04:57.226 17:51:12 -- scripts/common.sh@380 -- # local block=/dev/nvme2n2 pt 00:04:57.226 17:51:12 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:57.226 No valid GPT data, bailing 00:04:57.226 17:51:12 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:57.226 17:51:12 -- scripts/common.sh@393 -- # pt= 00:04:57.226 17:51:12 -- scripts/common.sh@394 -- # return 1 00:04:57.226 17:51:12 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:57.226 1+0 records in 00:04:57.226 1+0 records out 00:04:57.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0062751 s, 167 MB/s 00:04:57.226 17:51:12 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:57.226 17:51:12 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:57.226 17:51:12 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n3 00:04:57.226 17:51:12 -- scripts/common.sh@380 -- # local block=/dev/nvme2n3 pt 00:04:57.226 17:51:12 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:57.226 No valid GPT data, bailing 00:04:57.226 17:51:12 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:57.226 17:51:12 -- scripts/common.sh@393 -- # pt= 00:04:57.226 17:51:12 -- scripts/common.sh@394 -- # return 1 00:04:57.226 17:51:12 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:57.226 1+0 records in 00:04:57.226 1+0 records out 00:04:57.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00605345 s, 173 MB/s 00:04:57.226 17:51:12 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:57.226 17:51:12 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:57.226 17:51:12 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme3n1 00:04:57.226 17:51:12 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:04:57.226 17:51:12 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:57.226 No valid GPT data, bailing 00:04:57.226 17:51:12 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:57.226 17:51:12 -- scripts/common.sh@393 -- # pt= 00:04:57.226 17:51:12 -- scripts/common.sh@394 -- # return 1 00:04:57.226 17:51:12 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:57.226 1+0 records in 00:04:57.226 1+0 records out 00:04:57.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436989 s, 240 MB/s 00:04:57.226 17:51:12 -- spdk/autotest.sh@116 -- # sync 00:04:57.226 17:51:13 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:57.226 17:51:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:57.226 17:51:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:59.135 17:51:16 -- spdk/autotest.sh@122 -- # uname -s 00:04:59.135 17:51:16 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:04:59.135 17:51:16 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:59.135 17:51:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.135 17:51:16 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.135 17:51:16 -- common/autotest_common.sh@10 -- # set +x 00:04:59.395 ************************************ 00:04:59.395 START TEST setup.sh 00:04:59.395 ************************************ 00:04:59.395 17:51:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:59.395 * Looking for test storage... 00:04:59.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:59.395 17:51:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:59.395 17:51:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:59.395 17:51:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:59.395 17:51:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:59.395 17:51:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:59.395 17:51:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:59.395 17:51:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:59.395 17:51:16 -- scripts/common.sh@335 -- # IFS=.-: 00:04:59.395 17:51:16 -- scripts/common.sh@335 -- # read -ra ver1 00:04:59.395 17:51:16 -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.395 17:51:16 -- scripts/common.sh@336 -- # read -ra ver2 00:04:59.395 17:51:16 -- scripts/common.sh@337 -- # local 'op=<' 00:04:59.395 17:51:16 -- scripts/common.sh@339 -- # ver1_l=2 00:04:59.395 17:51:16 -- scripts/common.sh@340 -- # ver2_l=1 00:04:59.395 17:51:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:59.395 17:51:16 -- scripts/common.sh@343 -- # case "$op" in 00:04:59.395 17:51:16 -- scripts/common.sh@344 -- # : 1 00:04:59.395 17:51:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:59.395 17:51:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.395 17:51:16 -- scripts/common.sh@364 -- # decimal 1 00:04:59.395 17:51:16 -- scripts/common.sh@352 -- # local d=1 00:04:59.395 17:51:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.395 17:51:16 -- scripts/common.sh@354 -- # echo 1 00:04:59.395 17:51:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:59.395 17:51:16 -- scripts/common.sh@365 -- # decimal 2 00:04:59.395 17:51:16 -- scripts/common.sh@352 -- # local d=2 00:04:59.395 17:51:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.395 17:51:16 -- scripts/common.sh@354 -- # echo 2 00:04:59.395 17:51:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:59.395 17:51:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:59.395 17:51:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:59.395 17:51:16 -- scripts/common.sh@367 -- # return 0 00:04:59.395 17:51:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.395 17:51:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:59.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.395 --rc genhtml_branch_coverage=1 00:04:59.395 --rc genhtml_function_coverage=1 00:04:59.395 --rc genhtml_legend=1 00:04:59.395 --rc geninfo_all_blocks=1 00:04:59.395 --rc geninfo_unexecuted_blocks=1 00:04:59.395 00:04:59.395 ' 00:04:59.395 17:51:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:59.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.395 --rc genhtml_branch_coverage=1 00:04:59.395 --rc genhtml_function_coverage=1 00:04:59.395 --rc genhtml_legend=1 00:04:59.395 --rc geninfo_all_blocks=1 00:04:59.395 --rc geninfo_unexecuted_blocks=1 00:04:59.395 00:04:59.395 ' 00:04:59.395 17:51:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:59.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.395 --rc genhtml_branch_coverage=1 00:04:59.395 --rc genhtml_function_coverage=1 00:04:59.395 --rc genhtml_legend=1 00:04:59.395 --rc geninfo_all_blocks=1 00:04:59.395 --rc geninfo_unexecuted_blocks=1 00:04:59.395 00:04:59.395 ' 00:04:59.395 17:51:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:59.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.395 --rc genhtml_branch_coverage=1 00:04:59.395 --rc genhtml_function_coverage=1 00:04:59.395 --rc genhtml_legend=1 00:04:59.395 --rc geninfo_all_blocks=1 00:04:59.395 --rc geninfo_unexecuted_blocks=1 00:04:59.395 00:04:59.395 ' 00:04:59.395 17:51:16 -- setup/test-setup.sh@10 -- # uname -s 00:04:59.655 17:51:16 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:59.655 17:51:16 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:59.655 17:51:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.655 17:51:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.655 17:51:16 -- common/autotest_common.sh@10 -- # set +x 00:04:59.655 ************************************ 00:04:59.655 START TEST acl 00:04:59.655 ************************************ 00:04:59.655 17:51:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:59.655 * Looking for test storage... 
00:04:59.655 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:59.655 17:51:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:59.655 17:51:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:59.655 17:51:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:59.655 17:51:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:59.655 17:51:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:59.655 17:51:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:59.655 17:51:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:59.655 17:51:16 -- scripts/common.sh@335 -- # IFS=.-: 00:04:59.655 17:51:16 -- scripts/common.sh@335 -- # read -ra ver1 00:04:59.655 17:51:16 -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.655 17:51:16 -- scripts/common.sh@336 -- # read -ra ver2 00:04:59.655 17:51:16 -- scripts/common.sh@337 -- # local 'op=<' 00:04:59.655 17:51:16 -- scripts/common.sh@339 -- # ver1_l=2 00:04:59.655 17:51:16 -- scripts/common.sh@340 -- # ver2_l=1 00:04:59.655 17:51:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:59.655 17:51:16 -- scripts/common.sh@343 -- # case "$op" in 00:04:59.655 17:51:16 -- scripts/common.sh@344 -- # : 1 00:04:59.655 17:51:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:59.655 17:51:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.655 17:51:16 -- scripts/common.sh@364 -- # decimal 1 00:04:59.655 17:51:16 -- scripts/common.sh@352 -- # local d=1 00:04:59.655 17:51:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.655 17:51:16 -- scripts/common.sh@354 -- # echo 1 00:04:59.655 17:51:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:59.655 17:51:16 -- scripts/common.sh@365 -- # decimal 2 00:04:59.655 17:51:16 -- scripts/common.sh@352 -- # local d=2 00:04:59.655 17:51:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.655 17:51:16 -- scripts/common.sh@354 -- # echo 2 00:04:59.655 17:51:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:59.655 17:51:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:59.655 17:51:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:59.655 17:51:16 -- scripts/common.sh@367 -- # return 0 00:04:59.655 17:51:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.655 17:51:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:59.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.655 --rc genhtml_branch_coverage=1 00:04:59.655 --rc genhtml_function_coverage=1 00:04:59.655 --rc genhtml_legend=1 00:04:59.655 --rc geninfo_all_blocks=1 00:04:59.655 --rc geninfo_unexecuted_blocks=1 00:04:59.655 00:04:59.655 ' 00:04:59.655 17:51:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:59.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.655 --rc genhtml_branch_coverage=1 00:04:59.655 --rc genhtml_function_coverage=1 00:04:59.655 --rc genhtml_legend=1 00:04:59.655 --rc geninfo_all_blocks=1 00:04:59.655 --rc geninfo_unexecuted_blocks=1 00:04:59.655 00:04:59.655 ' 00:04:59.655 17:51:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:59.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.655 --rc genhtml_branch_coverage=1 00:04:59.655 --rc genhtml_function_coverage=1 00:04:59.655 --rc genhtml_legend=1 00:04:59.655 --rc geninfo_all_blocks=1 00:04:59.655 --rc geninfo_unexecuted_blocks=1 00:04:59.655 00:04:59.655 ' 00:04:59.655 17:51:16 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:59.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.655 --rc genhtml_branch_coverage=1 00:04:59.655 --rc genhtml_function_coverage=1 00:04:59.655 --rc genhtml_legend=1 00:04:59.655 --rc geninfo_all_blocks=1 00:04:59.655 --rc geninfo_unexecuted_blocks=1 00:04:59.655 00:04:59.655 ' 00:04:59.655 17:51:16 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:59.655 17:51:16 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:59.655 17:51:16 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:59.655 17:51:16 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:59.655 17:51:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:59.655 17:51:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:59.655 17:51:16 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:59.655 17:51:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:59.655 17:51:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:59.655 17:51:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:59.655 17:51:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:59.655 17:51:16 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:59.655 17:51:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:59.655 17:51:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:59.655 17:51:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:59.655 17:51:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:04:59.655 17:51:16 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:04:59.655 17:51:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:59.655 17:51:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:59.655 17:51:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:59.655 17:51:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n2 00:04:59.655 17:51:16 -- common/autotest_common.sh@1657 -- # local device=nvme2n2 00:04:59.655 17:51:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:59.655 17:51:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:59.655 17:51:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:59.655 17:51:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n3 00:04:59.655 17:51:16 -- common/autotest_common.sh@1657 -- # local device=nvme2n3 00:04:59.655 17:51:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:59.655 17:51:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:59.655 17:51:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:59.655 17:51:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:04:59.656 17:51:16 -- common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:04:59.656 17:51:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:59.656 17:51:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:59.656 17:51:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:59.656 17:51:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:04:59.656 17:51:16 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:04:59.656 
17:51:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:59.656 17:51:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:59.656 17:51:16 -- setup/acl.sh@12 -- # devs=() 00:04:59.656 17:51:16 -- setup/acl.sh@12 -- # declare -a devs 00:04:59.656 17:51:16 -- setup/acl.sh@13 -- # drivers=() 00:04:59.656 17:51:16 -- setup/acl.sh@13 -- # declare -A drivers 00:04:59.656 17:51:16 -- setup/acl.sh@51 -- # setup reset 00:04:59.656 17:51:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:59.656 17:51:16 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:01.560 17:51:18 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:01.560 17:51:18 -- setup/acl.sh@16 -- # local dev driver 00:05:01.560 17:51:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.560 17:51:18 -- setup/acl.sh@15 -- # setup output status 00:05:01.560 17:51:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.560 17:51:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:01.560 Hugepages 00:05:01.560 node hugesize free / total 00:05:01.560 17:51:18 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:01.561 17:51:18 -- setup/acl.sh@19 -- # continue 00:05:01.561 17:51:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.561 00:05:01.561 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:01.561 17:51:18 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:01.561 17:51:18 -- setup/acl.sh@19 -- # continue 00:05:01.561 17:51:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.820 17:51:18 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:01.820 17:51:18 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:01.820 17:51:18 -- setup/acl.sh@20 -- # continue 00:05:01.820 17:51:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.820 17:51:18 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:01.820 17:51:18 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:01.820 17:51:18 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:01.820 17:51:18 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:01.820 17:51:18 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:01.820 17:51:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.820 17:51:18 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:01.820 17:51:18 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:01.820 17:51:18 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:01.820 17:51:18 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:01.820 17:51:18 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:01.820 17:51:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:02.079 17:51:18 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:05:02.079 17:51:18 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:02.079 17:51:18 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:05:02.079 17:51:18 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:02.079 17:51:18 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:02.079 17:51:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:02.079 17:51:18 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:05:02.079 17:51:18 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:02.079 17:51:18 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:05:02.079 17:51:18 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:02.079 17:51:18 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 
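The acl.sh trace above repeats the zoned-namespace sweep from the earlier pre-cleanup: every /sys/block/nvme* entry is probed, and a namespace counts as zoned only when queue/zoned reports something other than "none" (every namespace here reports "none", so no device gets excluded). A minimal sketch of that walk; the real get_zoned_devs keys the array by device and stores the PCI address, which is elided here:

# Sketch of the zoned-device sweep traced above.
declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    dev=${nvme##*/}
    [[ -e $nvme/queue/zoned ]] || continue           # no queue info exposed
    if [[ $(< "$nvme/queue/zoned") != none ]]; then  # "none" = conventional
        zoned_devs[$dev]=1
    fi
done
echo "zoned devices: ${!zoned_devs[*]}"
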
00:05:02.079 17:51:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:02.079 17:51:18 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:05:02.079 17:51:18 -- setup/acl.sh@54 -- # run_test denied denied 00:05:02.079 17:51:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.079 17:51:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.079 17:51:18 -- common/autotest_common.sh@10 -- # set +x 00:05:02.079 ************************************ 00:05:02.079 START TEST denied 00:05:02.079 ************************************ 00:05:02.079 17:51:18 -- common/autotest_common.sh@1114 -- # denied 00:05:02.079 17:51:18 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:02.079 17:51:18 -- setup/acl.sh@38 -- # setup output config 00:05:02.079 17:51:18 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:02.079 17:51:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.079 17:51:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:03.983 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:03.983 17:51:20 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:03.983 17:51:20 -- setup/acl.sh@28 -- # local dev driver 00:05:03.983 17:51:20 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:03.983 17:51:20 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:03.983 17:51:20 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:03.983 17:51:20 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:03.983 17:51:20 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:03.983 17:51:20 -- setup/acl.sh@41 -- # setup reset 00:05:03.983 17:51:20 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:03.983 17:51:20 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:10.583 00:05:10.583 real 0m8.011s 00:05:10.583 user 0m1.097s 00:05:10.583 sys 0m2.075s 00:05:10.583 17:51:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.583 ************************************ 00:05:10.583 END TEST denied 00:05:10.583 ************************************ 00:05:10.583 17:51:26 -- common/autotest_common.sh@10 -- # set +x 00:05:10.583 17:51:27 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:10.583 17:51:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.583 17:51:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.583 17:51:27 -- common/autotest_common.sh@10 -- # set +x 00:05:10.583 ************************************ 00:05:10.583 START TEST allowed 00:05:10.583 ************************************ 00:05:10.583 17:51:27 -- common/autotest_common.sh@1114 -- # allowed 00:05:10.584 17:51:27 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:10.584 17:51:27 -- setup/acl.sh@45 -- # setup output config 00:05:10.584 17:51:27 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:10.584 17:51:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.584 17:51:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:11.957 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.957 17:51:28 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:05:11.957 17:51:28 -- setup/acl.sh@28 -- # local dev driver 00:05:11.957 17:51:28 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:11.957 17:51:28 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:11.957 17:51:28 -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:00:07.0/driver 00:05:11.957 17:51:28 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:11.957 17:51:28 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:11.957 17:51:28 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:11.957 17:51:28 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:08.0 ]] 00:05:11.957 17:51:28 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver 00:05:11.957 17:51:28 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:11.957 17:51:28 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:11.957 17:51:28 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:11.957 17:51:28 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]] 00:05:11.957 17:51:28 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver 00:05:11.957 17:51:28 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:11.957 17:51:28 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:11.957 17:51:28 -- setup/acl.sh@48 -- # setup reset 00:05:11.957 17:51:28 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:11.957 17:51:28 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:13.330 00:05:13.330 real 0m2.957s 00:05:13.330 user 0m1.217s 00:05:13.330 sys 0m1.792s 00:05:13.330 17:51:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.330 ************************************ 00:05:13.330 END TEST allowed 00:05:13.330 ************************************ 00:05:13.330 17:51:30 -- common/autotest_common.sh@10 -- # set +x 00:05:13.330 ************************************ 00:05:13.330 END TEST acl 00:05:13.330 ************************************ 00:05:13.330 00:05:13.330 real 0m13.736s 00:05:13.330 user 0m3.446s 00:05:13.330 sys 0m5.566s 00:05:13.330 17:51:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.330 17:51:30 -- common/autotest_common.sh@10 -- # set +x 00:05:13.330 17:51:30 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:13.330 17:51:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.330 17:51:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.330 17:51:30 -- common/autotest_common.sh@10 -- # set +x 00:05:13.330 ************************************ 00:05:13.330 START TEST hugepages 00:05:13.330 ************************************ 00:05:13.330 17:51:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:13.590 * Looking for test storage... 
00:05:13.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:13.590 17:51:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:13.590 17:51:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:13.590 17:51:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:13.590 17:51:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:13.590 17:51:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:13.590 17:51:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:13.590 17:51:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:13.590 17:51:30 -- scripts/common.sh@335 -- # IFS=.-: 00:05:13.590 17:51:30 -- scripts/common.sh@335 -- # read -ra ver1 00:05:13.590 17:51:30 -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.590 17:51:30 -- scripts/common.sh@336 -- # read -ra ver2 00:05:13.590 17:51:30 -- scripts/common.sh@337 -- # local 'op=<' 00:05:13.590 17:51:30 -- scripts/common.sh@339 -- # ver1_l=2 00:05:13.590 17:51:30 -- scripts/common.sh@340 -- # ver2_l=1 00:05:13.590 17:51:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:13.590 17:51:30 -- scripts/common.sh@343 -- # case "$op" in 00:05:13.590 17:51:30 -- scripts/common.sh@344 -- # : 1 00:05:13.590 17:51:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:13.590 17:51:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.590 17:51:30 -- scripts/common.sh@364 -- # decimal 1 00:05:13.590 17:51:30 -- scripts/common.sh@352 -- # local d=1 00:05:13.590 17:51:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.590 17:51:30 -- scripts/common.sh@354 -- # echo 1 00:05:13.590 17:51:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:13.590 17:51:30 -- scripts/common.sh@365 -- # decimal 2 00:05:13.590 17:51:30 -- scripts/common.sh@352 -- # local d=2 00:05:13.590 17:51:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.590 17:51:30 -- scripts/common.sh@354 -- # echo 2 00:05:13.590 17:51:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:13.590 17:51:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:13.590 17:51:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:13.590 17:51:30 -- scripts/common.sh@367 -- # return 0 00:05:13.590 17:51:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.590 17:51:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:13.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.590 --rc genhtml_branch_coverage=1 00:05:13.590 --rc genhtml_function_coverage=1 00:05:13.590 --rc genhtml_legend=1 00:05:13.590 --rc geninfo_all_blocks=1 00:05:13.590 --rc geninfo_unexecuted_blocks=1 00:05:13.590 00:05:13.590 ' 00:05:13.590 17:51:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:13.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.590 --rc genhtml_branch_coverage=1 00:05:13.590 --rc genhtml_function_coverage=1 00:05:13.590 --rc genhtml_legend=1 00:05:13.590 --rc geninfo_all_blocks=1 00:05:13.590 --rc geninfo_unexecuted_blocks=1 00:05:13.590 00:05:13.590 ' 00:05:13.590 17:51:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:13.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.590 --rc genhtml_branch_coverage=1 00:05:13.590 --rc genhtml_function_coverage=1 00:05:13.590 --rc genhtml_legend=1 00:05:13.590 --rc geninfo_all_blocks=1 00:05:13.590 --rc geninfo_unexecuted_blocks=1 00:05:13.590 00:05:13.590 ' 00:05:13.590 17:51:30 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:13.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.590 --rc genhtml_branch_coverage=1 00:05:13.590 --rc genhtml_function_coverage=1 00:05:13.590 --rc genhtml_legend=1 00:05:13.590 --rc geninfo_all_blocks=1 00:05:13.590 --rc geninfo_unexecuted_blocks=1 00:05:13.590 00:05:13.590 ' 00:05:13.590 17:51:30 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:13.590 17:51:30 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:13.590 17:51:30 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:13.590 17:51:30 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:13.590 17:51:30 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:13.590 17:51:30 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:13.590 17:51:30 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:13.590 17:51:30 -- setup/common.sh@18 -- # local node= 00:05:13.590 17:51:30 -- setup/common.sh@19 -- # local var val 00:05:13.590 17:51:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.590 17:51:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.590 17:51:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.590 17:51:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.590 17:51:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.590 17:51:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.590 17:51:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.590 17:51:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.590 17:51:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 4676896 kB' 'MemAvailable: 7345956 kB' 'Buffers: 3456 kB' 'Cached: 2871820 kB' 'SwapCached: 0 kB' 'Active: 465620 kB' 'Inactive: 2525520 kB' 'Active(anon): 126400 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 117552 kB' 'Mapped: 50976 kB' 'Shmem: 10536 kB' 'KReclaimable: 82532 kB' 'Slab: 187200 kB' 'SReclaimable: 82532 kB' 'SUnreclaim: 104668 kB' 'KernelStack: 6912 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411008 kB' 'Committed_AS: 311724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55956 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:05:13.590 17:51:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.590 17:51:30 -- setup/common.sh@32 -- # continue 00:05:13.590 17:51:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.590 17:51:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.590 17:51:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.590 17:51:30 -- setup/common.sh@32 -- # continue 00:05:13.590 17:51:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.590 17:51:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.591 17:51:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.591 17:51:30 -- 
[xtrace condensed: setup/common.sh@31-32 repeats the identical IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue sequence for each remaining /proc/meminfo key (Buffers through AnonHugePages) before the loop resumes below at ShmemHugePages]
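For readers following the trace: setup/common.sh's get_meminfo slurps /proc/meminfo with mapfile, strips any leading "Node N " prefix (that form appears when reading /sys/devices/system/node/nodeN/meminfo), then scans line by line with IFS=': ' until the requested key matches and echoes its value. A minimal standalone sketch of that pattern -- the function name and the loop shape are mine, not SPDK's exact code:

```bash
#!/usr/bin/env bash
# Minimal sketch of the get_meminfo pattern visible in the trace above;
# prints the value of a single /proc/meminfo key.
get_meminfo_sketch() {
    local get=$1 line var val _
    local -a mem
    mapfile -t mem < /proc/meminfo
    # Per-node files (/sys/devices/system/node/nodeN/meminfo) prefix each
    # line with "Node N "; the traced script strips that prefix first.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo_sketch Hugepagesize   # prints 2048 on the machine in this log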
# read -r var val _ 00:05:13.591 17:51:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.591 17:51:30 -- setup/common.sh@32 -- # continue 00:05:13.591 17:51:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.591 17:51:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.591 17:51:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.591 17:51:30 -- setup/common.sh@32 -- # continue 00:05:13.591 17:51:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.591 17:51:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.592 17:51:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.592 17:51:30 -- setup/common.sh@32 -- # continue 00:05:13.592 17:51:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.592 17:51:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.592 17:51:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.592 17:51:30 -- setup/common.sh@32 -- # continue 00:05:13.592 17:51:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.592 17:51:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.592 17:51:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.592 17:51:30 -- setup/common.sh@32 -- # continue 00:05:13.592 17:51:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.592 17:51:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.592 17:51:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.592 17:51:30 -- setup/common.sh@32 -- # continue 00:05:13.592 17:51:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.592 17:51:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.592 17:51:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.592 17:51:30 -- setup/common.sh@32 -- # continue 00:05:13.592 17:51:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.592 17:51:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.592 17:51:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.592 17:51:30 -- setup/common.sh@32 -- # continue 00:05:13.592 17:51:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.592 17:51:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.592 17:51:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.592 17:51:30 -- setup/common.sh@32 -- # continue 00:05:13.592 17:51:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.592 17:51:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.592 17:51:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.592 17:51:30 -- setup/common.sh@32 -- # continue 00:05:13.592 17:51:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.592 17:51:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.592 17:51:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.592 17:51:30 -- setup/common.sh@32 -- # continue 00:05:13.592 17:51:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.592 17:51:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.592 17:51:30 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.592 17:51:30 -- setup/common.sh@33 -- # echo 2048 00:05:13.592 17:51:30 -- setup/common.sh@33 -- # return 0 00:05:13.592 17:51:30 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:13.592 17:51:30 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:13.592 17:51:30 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:13.592 17:51:30 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:13.592 17:51:30 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:13.592 17:51:30 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:13.592 17:51:30 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:13.592 17:51:30 -- setup/hugepages.sh@207 -- # get_nodes 00:05:13.592 17:51:30 -- setup/hugepages.sh@27 -- # local node 00:05:13.592 17:51:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:13.592 17:51:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:13.592 17:51:30 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:13.592 17:51:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:13.592 17:51:30 -- setup/hugepages.sh@208 -- # clear_hp 00:05:13.592 17:51:30 -- setup/hugepages.sh@37 -- # local node hp 00:05:13.592 17:51:30 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:13.592 17:51:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:13.592 17:51:30 -- setup/hugepages.sh@41 -- # echo 0 00:05:13.592 17:51:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:13.592 17:51:30 -- setup/hugepages.sh@41 -- # echo 0 00:05:13.592 17:51:30 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:13.592 17:51:30 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:13.592 17:51:30 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:13.592 17:51:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.592 17:51:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.592 17:51:30 -- common/autotest_common.sh@10 -- # set +x 00:05:13.592 ************************************ 00:05:13.592 START TEST default_setup 00:05:13.592 ************************************ 00:05:13.592 17:51:30 -- common/autotest_common.sh@1114 -- # default_setup 00:05:13.592 17:51:30 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:13.592 17:51:30 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:13.592 17:51:30 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:13.592 17:51:30 -- setup/hugepages.sh@51 -- # shift 00:05:13.592 17:51:30 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:13.592 17:51:30 -- setup/hugepages.sh@52 -- # local node_ids 00:05:13.592 17:51:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:13.592 17:51:30 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:13.592 17:51:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:13.592 17:51:30 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:13.592 17:51:30 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:13.592 17:51:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:13.592 17:51:30 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:13.592 17:51:30 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:13.592 17:51:30 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:13.592 17:51:30 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:13.592 17:51:30 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:13.592 17:51:30 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:13.592 17:51:30 -- setup/hugepages.sh@73 -- # return 0 00:05:13.592 17:51:30 -- setup/hugepages.sh@137 -- # setup output 00:05:13.592 17:51:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.592 17:51:30 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:14.968 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:15.227 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.227 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.227 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.227 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.227 17:51:32 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:15.227 17:51:32 -- setup/hugepages.sh@89 -- # local node 00:05:15.227 17:51:32 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:15.227 17:51:32 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:15.227 17:51:32 -- setup/hugepages.sh@92 -- # local surp 00:05:15.227 17:51:32 -- setup/hugepages.sh@93 -- # local resv 00:05:15.227 17:51:32 -- setup/hugepages.sh@94 -- # local anon 00:05:15.227 17:51:32 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:15.228 17:51:32 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:15.228 17:51:32 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:15.228 17:51:32 -- setup/common.sh@18 -- # local node= 00:05:15.228 17:51:32 -- setup/common.sh@19 -- # local var val 00:05:15.228 17:51:32 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.228 17:51:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.228 17:51:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.228 17:51:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.228 17:51:32 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.228 17:51:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.228 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.228 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.228 17:51:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6782308 kB' 'MemAvailable: 9451124 kB' 'Buffers: 3456 kB' 'Cached: 2871812 kB' 'SwapCached: 0 kB' 'Active: 468216 kB' 'Inactive: 2525540 kB' 'Active(anon): 128996 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525540 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120064 kB' 'Mapped: 51032 kB' 'Shmem: 10500 kB' 'KReclaimable: 82004 kB' 'Slab: 186840 kB' 'SReclaimable: 82004 kB' 'SUnreclaim: 104836 kB' 'KernelStack: 6928 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 319352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 56020 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:05:15.228 17:51:32 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.228 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.228 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.228 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.228 17:51:32 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.228 17:51:32 -- setup/common.sh@32 
[xtrace condensed: the same per-key comparison loop runs again for get_meminfo AnonHugePages, stepping from MemAvailable through HardwareCorrupted before matching AnonHugePages below]
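Earlier in the trace (setup/hugepages.sh@136), get_test_nr_hugepages converted the requested size of 2097152 kB into a page count: with Hugepagesize = 2048 kB that is 2097152 / 2048 = 1024 pages, which is the nr_hugepages value the default_setup test carries forward. A sketch of that arithmetic, with the two knobs the script records as default_huge_nr and global_huge_nr noted in comments (the helper name is mine):

```bash
#!/usr/bin/env bash
# Sketch of the size -> nr_hugepages arithmetic implied by the trace:
# 2097152 kB / 2048 kB per page = 1024 pages.
compute_nr_hugepages() {
    local size_kb=$1
    local hugepagesize_kb
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    echo $(( size_kb / hugepagesize_kb ))
}

nr=$(compute_nr_hugepages 2097152)   # -> 1024 on this 2048 kB system
echo "would write $nr hugepages"
# Per-size knob recorded by the script (default_huge_nr):
#   /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# Global knob (global_huge_nr):
#   /proc/sys/vm/nr_hugepages
```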
IFS=': ' 00:05:15.491 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.491 17:51:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.491 17:51:32 -- setup/common.sh@33 -- # echo 0 00:05:15.491 17:51:32 -- setup/common.sh@33 -- # return 0 00:05:15.491 17:51:32 -- setup/hugepages.sh@97 -- # anon=0 00:05:15.491 17:51:32 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:15.491 17:51:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.491 17:51:32 -- setup/common.sh@18 -- # local node= 00:05:15.491 17:51:32 -- setup/common.sh@19 -- # local var val 00:05:15.491 17:51:32 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.491 17:51:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.491 17:51:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.491 17:51:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.491 17:51:32 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.491 17:51:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.491 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.491 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.491 17:51:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6782576 kB' 'MemAvailable: 9451392 kB' 'Buffers: 3456 kB' 'Cached: 2871812 kB' 'SwapCached: 0 kB' 'Active: 467992 kB' 'Inactive: 2525540 kB' 'Active(anon): 128772 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525540 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119856 kB' 'Mapped: 50908 kB' 'Shmem: 10500 kB' 'KReclaimable: 82004 kB' 'Slab: 186840 kB' 'SReclaimable: 82004 kB' 'SUnreclaim: 104836 kB' 'KernelStack: 6960 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 319352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 56004 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:05:15.491 17:51:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.491 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.491 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.491 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.491 17:51:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.491 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.491 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.491 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.491 17:51:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.491 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.491 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.491 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.491 17:51:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.491 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.491 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 
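verify_nr_hugepages, which this part of the trace is executing, first confirms transparent hugepages are not set to [never] (the hugepages.sh@96 check above), then samples AnonHugePages, HugePages_Surp, and HugePages_Rsvd via get_meminfo, storing them as anon, surp, and resv. A rough sketch of that sampling step, reusing get_meminfo_sketch from the earlier example; the final consistency check is my guess at the intent, not SPDK's exact assertion:

```bash
# Rough sketch of the sampling verify_nr_hugepages performs in this trace.
# Relies on get_meminfo_sketch from the earlier example.
anon=$(get_meminfo_sketch AnonHugePages)     # 0 in this log
surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this log
resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in this log

total=$(get_meminfo_sketch HugePages_Total)  # 1024 after default_setup
free=$(get_meminfo_sketch HugePages_Free)    # 1024 in the snapshots above

# Plausible sanity check (my assumption): no surplus pages were allocated
# and the whole reserved pool is still free before the test touches it.
(( surp == 0 && total == free )) && echo "hugepage pool looks consistent"
```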
[xtrace condensed: the comparison loop for get_meminfo HugePages_Surp steps from Cached through ShmemHugePages before matching HugePages_Surp below]
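Back at setup/hugepages.sh@208, clear_hp walked every hugepages-* directory under each /sys/devices/system/node/node<N> and echoed 0 for it, then exported CLEAR_HUGE=yes, so each test starts from an empty per-node pool. The xtrace does not show redirections, so the target file is presumably $hp/nr_hugepages; a safe-to-run sketch of that loop (writes stubbed out, node discovery simplified to a glob):

```bash
#!/usr/bin/env bash
# Sketch of the clear_hp loop from setup/hugepages.sh@37-45 in the trace:
# zero out every per-node, per-size hugepage pool. Needs root when the
# stubbed echo is replaced with a real write.
shopt -s nullglob
clear_hp_sketch() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            # The trace shows a bare "echo 0"; the redirection target is
            # presumably $hp/nr_hugepages, stubbed here for safety.
            echo "would write 0 to $hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes
}

clear_hp_sketch
```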
IFS=': ' 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.492 17:51:32 -- setup/common.sh@33 -- # echo 0 00:05:15.492 17:51:32 -- setup/common.sh@33 -- # return 0 00:05:15.492 17:51:32 -- setup/hugepages.sh@99 -- # surp=0 00:05:15.492 17:51:32 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:15.492 17:51:32 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:15.492 17:51:32 -- setup/common.sh@18 -- # local node= 00:05:15.492 17:51:32 -- setup/common.sh@19 -- # local var val 00:05:15.492 17:51:32 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.492 17:51:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.492 17:51:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.492 17:51:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.492 17:51:32 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.492 17:51:32 
-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.492 17:51:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6782576 kB' 'MemAvailable: 9451400 kB' 'Buffers: 3456 kB' 'Cached: 2871812 kB' 'SwapCached: 0 kB' 'Active: 468140 kB' 'Inactive: 2525548 kB' 'Active(anon): 128920 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120064 kB' 'Mapped: 50960 kB' 'Shmem: 10500 kB' 'KReclaimable: 82004 kB' 'Slab: 186844 kB' 'SReclaimable: 82004 kB' 'SUnreclaim: 104840 kB' 'KernelStack: 6944 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 319352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 56020 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.492 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.492 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.493 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.493 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.493 17:51:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.493 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.493 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.493 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.493 17:51:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
[xtrace condensed: the comparison loop for get_meminfo HugePages_Rsvd steps through the remaining /proc/meminfo keys (Active(anon) onward); the excerpt ends mid-scan]
CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.493 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.493 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.493 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.493 17:51:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.494 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.494 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.494 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.494 17:51:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.494 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.494 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.494 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.494 17:51:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.494 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.494 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.494 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.494 17:51:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.494 17:51:32 -- setup/common.sh@32 -- # continue 00:05:15.494 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.494 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.494 17:51:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.494 17:51:32 -- setup/common.sh@33 -- # echo 0 00:05:15.494 17:51:32 -- setup/common.sh@33 -- # return 0 00:05:15.494 17:51:32 -- setup/hugepages.sh@100 -- # resv=0 00:05:15.494 17:51:32 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:15.494 nr_hugepages=1024 00:05:15.494 resv_hugepages=0 00:05:15.494 17:51:32 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:15.494 surplus_hugepages=0 00:05:15.494 17:51:32 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:15.494 17:51:32 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:15.494 anon_hugepages=0 00:05:15.494 17:51:32 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:15.494 17:51:32 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:15.494 17:51:32 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:15.494 17:51:32 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:15.494 17:51:32 -- setup/common.sh@18 -- # local node= 00:05:15.494 17:51:32 -- setup/common.sh@19 -- # local var val 00:05:15.494 17:51:32 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.494 17:51:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.494 17:51:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.494 17:51:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.494 17:51:32 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.494 17:51:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.494 17:51:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.494 17:51:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.494 17:51:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6782324 kB' 'MemAvailable: 9451148 kB' 'Buffers: 3456 kB' 'Cached: 2871812 kB' 'SwapCached: 0 kB' 'Active: 467648 kB' 'Inactive: 2525548 kB' 'Active(anon): 128428 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 
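The xtrace above is SPDK's setup/common.sh get_meminfo helper at work: it snapshots a meminfo file into an array, strips any per-node "Node N " prefix, then scans field by field until the requested key matches and echoes its value. A minimal standalone sketch of that flow, reconstructed from the trace (the real helper lives in setup/common.sh and may differ in detail):

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the +([0-9]) pattern below

    # get_meminfo FIELD [NODE] - echo FIELD's value from /proc/meminfo or,
    # when NODE is given, from that node's own meminfo file (as in the trace).
    get_meminfo() {
        local get=$1 node=$2 mem_f=/proc/meminfo
        local mem line var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes every line with "Node N "; strip it first.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && echo "$val" && return 0
        done
        return 1
    }

    get_meminfo HugePages_Total     # -> 1024 in the run above
    get_meminfo HugePages_Surp 0    # node 0's surplus pages -> 0 above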
00:05:15.494 17:51:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6782324 kB' 'MemAvailable: 9451148 kB' 'Buffers: 3456 kB' 'Cached: 2871812 kB' 'SwapCached: 0 kB' 'Active: 467648 kB' 'Inactive: 2525548 kB' 'Active(anon): 128428 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119548 kB' 'Mapped: 50908 kB' 'Shmem: 10500 kB' 'KReclaimable: 82004 kB' 'Slab: 186852 kB' 'SReclaimable: 82004 kB' 'SUnreclaim: 104848 kB' 'KernelStack: 6912 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 319352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 56004 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:05:15.494 17:51:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:15.494 17:51:32 -- setup/common.sh@32 -- # continue
[... identical per-field scan elided, down to the matching field ...]
00:05:15.495 17:51:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:15.495 17:51:32 -- setup/common.sh@33 -- # echo 1024
00:05:15.495 17:51:32 -- setup/common.sh@33 -- # return 0
00:05:15.495 17:51:32 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:15.495 17:51:32 -- setup/hugepages.sh@112 -- # get_nodes
00:05:15.495 17:51:32 -- setup/hugepages.sh@27 -- # local node
00:05:15.495 17:51:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:15.495 17:51:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:15.495 17:51:32 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:15.495 17:51:32 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:15.495 17:51:32 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:15.495 17:51:32 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:15.495 17:51:32 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:15.495 17:51:32 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:15.495 17:51:32 -- setup/common.sh@18 -- # local node=0
00:05:15.495 17:51:32 -- setup/common.sh@19 -- # local var val
00:05:15.495 17:51:32 -- setup/common.sh@20 -- # local mem_f mem
00:05:15.495 17:51:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:15.495 17:51:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:15.495 17:51:32 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:15.495 17:51:32 -- setup/common.sh@28 -- # mapfile -t mem
00:05:15.495 17:51:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:15.495 17:51:32 -- setup/common.sh@31 -- # IFS=': '
00:05:15.495 17:51:32 -- setup/common.sh@31 -- # read -r var val _
00:05:15.496 17:51:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6782324 kB' 'MemUsed: 5456792 kB' 'SwapCached: 0 kB' 'Active: 467908 kB' 'Inactive: 2525548 kB' 'Active(anon): 128688 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'FilePages: 2875268 kB' 'Mapped: 50908 kB' 'AnonPages: 119808 kB' 'Shmem: 10500 kB' 'KernelStack: 6912 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82004 kB' 'Slab: 186852 kB' 'SReclaimable: 82004 kB' 'SUnreclaim: 104848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:15.496 17:51:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:15.496 17:51:32 -- setup/common.sh@32 -- # continue
[... identical per-field scan of the node0 snapshot elided ...]
00:05:15.497 17:51:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:15.497 17:51:32 -- setup/common.sh@33 -- # echo 0
00:05:15.497 17:51:32 -- setup/common.sh@33 -- # return 0
00:05:15.497 17:51:32 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:15.497 17:51:32 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:15.497 17:51:32 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:15.497 17:51:32 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:15.497 17:51:32 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:15.497 node0=1024 expecting 1024
00:05:15.497 17:51:32 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:15.497 real    0m1.847s
00:05:15.497 user    0m0.722s
00:05:15.497 sys     0m1.111s
00:05:15.497 17:51:32 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:15.497 17:51:32 -- common/autotest_common.sh@10 -- # set +x
00:05:15.497 ************************************
00:05:15.497 END TEST default_setup
00:05:15.497 ************************************
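END TEST default_setup closes a full verify_nr_hugepages pass: the global HugePages_Total has to equal the requested page count plus surplus and reserved pages, and the per-node totals are then compared against the expected split (node0=1024 above, since this VM has a single NUMA node). A condensed sketch of that accounting, assuming the get_meminfo helper sketched earlier; the function name verify_hugepages here is illustrative, not the script's own:

    # Restates the checks traced above; nr_hugepages is the requested count.
    verify_hugepages() {
        local nr_hugepages=$1 node resv surp total
        resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
        surp=$(get_meminfo HugePages_Surp)     # 0 in this run
        total=$(get_meminfo HugePages_Total)   # 1024 in this run
        (( total == nr_hugepages + surp + resv )) || return 1
        # Per-node pass; on this one-node VM, node0 carries every page.
        for node in /sys/devices/system/node/node[0-9]*; do
            node=${node##*node}
            echo "node$node=$(get_meminfo HugePages_Total "$node") expecting $nr_hugepages"
        done
    }

    verify_hugepages 1024    # prints: node0=1024 expecting 1024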
00:05:15.497 17:51:32 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:15.497 17:51:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:15.497 17:51:32 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:15.497 17:51:32 -- common/autotest_common.sh@10 -- # set +x
00:05:15.497 ************************************
00:05:15.497 START TEST per_node_1G_alloc
00:05:15.497 ************************************
00:05:15.497 17:51:32 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:05:15.497 17:51:32 -- setup/hugepages.sh@143 -- # local IFS=,
00:05:15.497 17:51:32 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:15.497 17:51:32 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:15.497 17:51:32 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:15.497 17:51:32 -- setup/hugepages.sh@51 -- # shift
00:05:15.497 17:51:32 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:15.497 17:51:32 -- setup/hugepages.sh@52 -- # local node_ids
00:05:15.497 17:51:32 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:15.497 17:51:32 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:15.497 17:51:32 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:15.497 17:51:32 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:15.497 17:51:32 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:15.497 17:51:32 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:15.497 17:51:32 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:15.497 17:51:32 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:15.497 17:51:32 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:15.497 17:51:32 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:15.497 17:51:32 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:15.497 17:51:32 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:15.497 17:51:32 -- setup/hugepages.sh@73 -- # return 0
00:05:15.497 17:51:32 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:15.497 17:51:32 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:15.497 17:51:32 -- setup/hugepages.sh@146 -- # setup output
00:05:15.497 17:51:32 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:15.497 17:51:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:16.448 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:16.448 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:16.448 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:16.448 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:16.448 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
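per_node_1G_alloc, started above, asks for 1 GiB of hugepages bound to NUMA node 0: get_test_nr_hugepages divides the 1048576 kB request by the default hugepage size to get a per-node page count, and setup.sh is re-run with the result. The arithmetic behind the NRHUGE=512 seen in the trace, as a sketch (default_hugepages taken from the 'Hugepagesize: 2048 kB' line in the snapshots):

    # 1 GiB expressed in kB, split into default-sized (2 MiB) hugepages.
    size=1048576              # kB, the argument to get_test_nr_hugepages
    default_hugepages=2048    # kB, Hugepagesize from /proc/meminfo
    nr_hugepages=$(( size / default_hugepages ))    # -> 512
    # Re-run setup with the count pinned to node 0, as the trace does; the
    # kernel side of such pinning is the per-node sysfs knob, roughly:
    #   /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    NRHUGE=$nr_hugepages HUGENODE=0 ./scripts/setup.sh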
00:05:16.448 17:51:33 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:05:16.448 17:51:33 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:16.448 17:51:33 -- setup/hugepages.sh@89 -- # local node
00:05:16.448 17:51:33 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:16.448 17:51:33 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:16.448 17:51:33 -- setup/hugepages.sh@92 -- # local surp
00:05:16.448 17:51:33 -- setup/hugepages.sh@93 -- # local resv
00:05:16.448 17:51:33 -- setup/hugepages.sh@94 -- # local anon
00:05:16.448 17:51:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:16.448 17:51:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:16.448 17:51:33 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:16.448 17:51:33 -- setup/common.sh@18 -- # local node=
00:05:16.448 17:51:33 -- setup/common.sh@19 -- # local var val
00:05:16.448 17:51:33 -- setup/common.sh@20 -- # local mem_f mem
00:05:16.448 17:51:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:16.448 17:51:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:16.448 17:51:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:16.448 17:51:33 -- setup/common.sh@28 -- # mapfile -t mem
00:05:16.448 17:51:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:16.448 17:51:33 -- setup/common.sh@31 -- # IFS=': '
00:05:16.448 17:51:33 -- setup/common.sh@31 -- # read -r var val _
00:05:16.448 17:51:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7834824 kB' 'MemAvailable: 10503656 kB' 'Buffers: 3456 kB' 'Cached: 2871808 kB' 'SwapCached: 0 kB' 'Active: 468028 kB' 'Inactive: 2525548 kB' 'Active(anon): 128808 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119848 kB' 'Mapped: 50988 kB' 'Shmem: 10496 kB' 'KReclaimable: 82020 kB' 'Slab: 186980 kB' 'SReclaimable: 82020 kB' 'SUnreclaim: 104960 kB' 'KernelStack: 6928 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 319332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 56052 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:05:16.448 17:51:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:16.448 17:51:33 -- setup/common.sh@32 -- # continue
[... identical per-field scan elided, down to the matching field ...]
00:05:16.450 17:51:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:16.450 17:51:33 -- setup/common.sh@33 -- # echo 0
00:05:16.450 17:51:33 -- setup/common.sh@33 -- # return 0
00:05:16.450 17:51:33 -- setup/hugepages.sh@97 -- # anon=0
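The 'always [madvise] never' string tested at hugepages.sh@96 above is, by all appearances, the content of the kernel's /sys/kernel/mm/transparent_hugepage/enabled switch: only when THP is not globally disabled does verify_nr_hugepages sample AnonHugePages, which is what lands in anon=0 here. A sketch of that guard, assuming the get_meminfo helper from earlier (the sysfs path is the standard kernel location, inferred rather than shown in this log):

    # Sample THP usage only if transparent hugepages are not set to [never].
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *'[never]'* ]]; then
        anon=$(get_meminfo AnonHugePages)    # 0 (kB) in the snapshot above
    fi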
00:05:16.450 17:51:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:16.450 17:51:33 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:16.450 17:51:33 -- setup/common.sh@18 -- # local node=
00:05:16.450 17:51:33 -- setup/common.sh@19 -- # local var val
00:05:16.450 17:51:33 -- setup/common.sh@20 -- # local mem_f mem
00:05:16.450 17:51:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:16.450 17:51:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:16.450 17:51:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:16.450 17:51:33 -- setup/common.sh@28 -- # mapfile -t mem
00:05:16.450 17:51:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:16.450 17:51:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7834824 kB' 'MemAvailable: 10503656 kB' 'Buffers: 3456 kB' 'Cached: 2871808 kB' 'SwapCached: 0 kB' 'Active: 468040 kB' 'Inactive: 2525548 kB' 'Active(anon): 128820 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119944 kB' 'Mapped: 51040 kB' 'Shmem: 10496 kB' 'KReclaimable: 82020 kB' 'Slab: 186968 kB' 'SReclaimable: 82020 kB' 'SUnreclaim: 104948 kB' 'KernelStack: 6912 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 319352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 56036 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:05:16.450 17:51:33 -- setup/common.sh@31 -- # IFS=': '
00:05:16.450 17:51:33 -- setup/common.sh@31 -- # read -r var val _
00:05:16.450 17:51:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:16.450 17:51:33 -- setup/common.sh@32 -- # continue
[... identical per-field scan continues beyond the end of this excerpt ...]
# IFS=': ' 00:05:16.450 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.450 17:51:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.450 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.450 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.450 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.450 17:51:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.450 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.450 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.450 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.450 17:51:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.450 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.450 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.450 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.450 17:51:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.450 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 
17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.451 17:51:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.451 17:51:33 -- setup/common.sh@33 -- # echo 0 00:05:16.451 17:51:33 -- setup/common.sh@33 -- # return 0 00:05:16.451 17:51:33 -- setup/hugepages.sh@99 -- # surp=0 00:05:16.451 17:51:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:16.451 17:51:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:16.451 17:51:33 -- setup/common.sh@18 -- # local node= 00:05:16.451 17:51:33 -- setup/common.sh@19 -- # local var val 00:05:16.451 17:51:33 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.451 17:51:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.451 17:51:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.451 17:51:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.451 17:51:33 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.451 17:51:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.451 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7834824 kB' 'MemAvailable: 10503656 kB' 'Buffers: 3456 kB' 'Cached: 2871808 kB' 'SwapCached: 0 kB' 'Active: 467888 kB' 'Inactive: 2525548 kB' 'Active(anon): 128668 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 119748 kB' 'Mapped: 50988 kB' 'Shmem: 10496 kB' 'KReclaimable: 82020 kB' 'Slab: 186960 kB' 'SReclaimable: 82020 kB' 'SUnreclaim: 104940 kB' 'KernelStack: 6880 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 319352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 56020 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 
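Every comparison in this run is bash xtrace rendering [[ $var == "$get" ]]: because the right-hand side is quoted it matches literally rather than as a glob, and set -x prints the expanded word with each character backslash-escaped, which is where the \H\u\g\e\P\a\g\e\s\_\R\s\v\d form comes from. A minimal reproduction (the variable name get mirrors the traced script; the snippet itself is ours):

  get=HugePages_Rsvd
  set -x
  # traced as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
  if [[ MemTotal == "$get" ]]; then echo match; fi
  set +x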
00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.452 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.452 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 
-- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.453 
17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.453 17:51:33 -- setup/common.sh@33 -- # echo 0 00:05:16.453 17:51:33 -- setup/common.sh@33 -- # return 0 00:05:16.453 17:51:33 -- setup/hugepages.sh@100 -- # resv=0 00:05:16.453 17:51:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:16.453 nr_hugepages=512 00:05:16.453 resv_hugepages=0 00:05:16.453 17:51:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:16.453 surplus_hugepages=0 00:05:16.453 17:51:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:16.453 anon_hugepages=0 00:05:16.453 17:51:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:16.453 17:51:33 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:16.453 17:51:33 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:16.453 17:51:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:16.453 17:51:33 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:16.453 17:51:33 -- setup/common.sh@18 -- # local node= 00:05:16.453 17:51:33 -- setup/common.sh@19 -- # local var val 00:05:16.453 17:51:33 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.453 17:51:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.453 17:51:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.453 17:51:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.453 17:51:33 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.453 17:51:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7835344 kB' 'MemAvailable: 10504176 kB' 'Buffers: 3456 kB' 'Cached: 2871808 kB' 'SwapCached: 0 kB' 'Active: 467888 kB' 'Inactive: 2525548 kB' 'Active(anon): 128668 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 120008 kB' 'Mapped: 50988 kB' 'Shmem: 10496 kB' 'KReclaimable: 82020 kB' 'Slab: 186960 kB' 'SReclaimable: 82020 kB' 'SUnreclaim: 104940 kB' 'KernelStack: 6948 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 319352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 56020 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.453 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.453 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 
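The three counters collected so far (anon, surp, resv) feed the verification arithmetic traced at hugepages.sh@107 and @109: the page count the test requested must equal what the kernel reports once surplus and reserved pages are folded in, and HugePages_Total is then re-read as a final confirmation. The same checks restated as a standalone snippet, with values hard-coded from this run and an awk read-back standing in for the script's get_meminfo:

  nr_hugepages=512 surp=0 resv=0 anon=0
  (( 512 == nr_hugepages + surp + resv )) || { echo "hugepage accounting mismatch"; exit 1; }
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  (( total == nr_hugepages )) || { echo "kernel allocated $total of $nr_hugepages"; exit 1; }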
00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 
17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.454 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.454 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.455 17:51:33 -- setup/common.sh@33 -- # echo 512 00:05:16.455 17:51:33 -- setup/common.sh@33 -- # return 0 00:05:16.455 17:51:33 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:16.455 17:51:33 -- setup/hugepages.sh@112 -- # get_nodes 00:05:16.455 17:51:33 -- setup/hugepages.sh@27 -- # local node 00:05:16.455 17:51:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:16.455 17:51:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:16.455 17:51:33 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:16.455 17:51:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:16.455 17:51:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:16.455 17:51:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:16.455 17:51:33 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:16.455 17:51:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.455 17:51:33 -- setup/common.sh@18 -- # local node=0 00:05:16.455 17:51:33 -- setup/common.sh@19 -- # local var val 00:05:16.455 17:51:33 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.455 17:51:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.455 17:51:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:16.455 17:51:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:16.455 17:51:33 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.455 17:51:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7835352 kB' 'MemUsed: 4403764 kB' 'SwapCached: 0 kB' 'Active: 467756 kB' 'Inactive: 2525548 kB' 'Active(anon): 128536 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'FilePages: 2875264 kB' 'Mapped: 50856 kB' 'AnonPages: 119644 kB' 'Shmem: 10496 kB' 'KernelStack: 6944 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82020 kB' 'Slab: 186956 kB' 'SReclaimable: 82020 kB' 'SUnreclaim: 104936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.455 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.455 17:51:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 
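At this point the same scan runs against /sys/devices/system/node/node0/meminfo instead of /proc/meminfo, which is why node-only fields such as MemUsed and FilePages appear in the snapshot above. The surrounding get_nodes logic discovers nodes with an extglob pattern and keys an array by node id, roughly as below; the array and expansion follow the traced script, while the awk extraction is our substitute for get_meminfo (per-node lines read "Node 0 HugePages_Total: 512", so the value is field 4):

  shopt -s extglob nullglob
  declare -A nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=$(awk '/HugePages_Total:/ {print $4}' "$node/meminfo")
  done
  no_nodes=${#nodes_sys[@]}   # 1 on this single-node VM, matching no_nodes=1 in the trace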
00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 
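The per-node surplus comes back 0, and the trace just below closes the loop: each node's observed hugepage count is compared against the expected split ("node0=512 expecting 512") with a literal string match, the [[ 512 == \5\1\2 ]] seen at hugepages.sh@130. A condensed paraphrase, under the assumption that nodes_test holds the observed counts and nodes_sys the expected ones (the direction of the comparison is our reading of the trace, not confirmed from the script):

  for node in "${!nodes_test[@]}"; do
      echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
      [[ ${nodes_test[node]} == "${nodes_sys[node]}" ]] || exit 1
  done

The sizing of the next test visible below follows the same arithmetic in reverse: even_2G_alloc requests 2097152 kB, which at the 2048 kB default hugepage size works out to the NRHUGE=1024 pages (and the 'Hugetlb: 2097152 kB' snapshot field) seen further on.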
17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # continue 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 17:51:33 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 17:51:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.456 17:51:33 -- setup/common.sh@33 -- # echo 0 00:05:16.456 17:51:33 -- setup/common.sh@33 -- # return 0 00:05:16.456 17:51:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:16.456 17:51:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:16.456 17:51:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:16.456 17:51:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:16.456 17:51:33 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:16.456 node0=512 expecting 512 00:05:16.456 17:51:33 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:16.456 00:05:16.456 real 0m0.991s 00:05:16.456 user 0m0.418s 00:05:16.456 sys 0m0.641s 00:05:16.456 17:51:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.456 17:51:33 -- common/autotest_common.sh@10 -- # set +x 00:05:16.457 ************************************ 00:05:16.457 END TEST per_node_1G_alloc 00:05:16.457 ************************************ 00:05:16.715 17:51:33 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:16.715 17:51:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.715 17:51:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.715 17:51:33 -- common/autotest_common.sh@10 -- # set +x 00:05:16.715 ************************************ 00:05:16.715 START TEST even_2G_alloc 00:05:16.715 ************************************ 00:05:16.715 17:51:33 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:05:16.715 17:51:33 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:16.715 17:51:33 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:16.715 17:51:33 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:16.715 17:51:33 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:16.715 17:51:33 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:16.715 17:51:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:16.715 17:51:33 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:16.715 17:51:33 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:16.715 17:51:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:16.715 17:51:33 -- setup/hugepages.sh@65 -- # local 
00:05:16.715 17:51:33 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:16.715 17:51:33 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:16.715 17:51:33 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:16.715 17:51:33 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:16.715 17:51:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:16.715 17:51:33 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:05:16.715 17:51:33 -- setup/hugepages.sh@83 -- # : 0
00:05:16.715 17:51:33 -- setup/hugepages.sh@84 -- # : 0
00:05:16.716 17:51:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:16.716 17:51:33 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:16.716 17:51:33 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:16.716 17:51:33 -- setup/hugepages.sh@153 -- # setup output
00:05:16.716 17:51:33 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:16.716 17:51:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:17.284 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:17.284 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:17.284 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:17.284 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:17.284 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:17.549 17:51:34 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:17.549 17:51:34 -- setup/hugepages.sh@89 -- # local node
00:05:17.549 17:51:34 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:17.549 17:51:34 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:17.549 17:51:34 -- setup/hugepages.sh@92 -- # local surp
00:05:17.549 17:51:34 -- setup/hugepages.sh@93 -- # local resv
00:05:17.549 17:51:34 -- setup/hugepages.sh@94 -- # local anon
00:05:17.549 17:51:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:17.549 17:51:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:17.549 17:51:34 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:17.549 17:51:34 -- setup/common.sh@18 -- # local node=
00:05:17.549 17:51:34 -- setup/common.sh@19 -- # local var val
00:05:17.549 17:51:34 -- setup/common.sh@20 -- # local mem_f mem
00:05:17.549 17:51:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:17.549 17:51:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:17.549 17:51:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:17.549 17:51:34 -- setup/common.sh@28 -- # mapfile -t mem
00:05:17.549 17:51:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:17.549 17:51:34 -- setup/common.sh@31 -- # IFS=': '
00:05:17.549 17:51:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6784108 kB' 'MemAvailable: 9452940 kB' 'Buffers: 3456 kB' 'Cached: 2871808 kB' 'SwapCached: 0 kB' 'Active: 467864 kB' 'Inactive: 2525548 kB' 'Active(anon): 128644 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119716 kB' 'Mapped: 51208 kB' 'Shmem: 10496 kB' 'KReclaimable: 82020 kB' 'Slab: 186948 kB' 'SReclaimable: 82020 kB' 'SUnreclaim: 104928 kB' 'KernelStack: 6940 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 319628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 56036 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:05:17.549 17:51:34 -- setup/common.sh@31 -- # read -r var val _
00:05:17.549 17:51:34 -- setup/common.sh@31/@32 [xtrace elided: every key of the snapshot above, MemTotal through HardwareCorrupted, is compared against AnonHugePages in turn, each non-match hitting "continue"]
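For reference, get_test_nr_hugepages turned the requested 2097152 kB (2 GiB) into nr_hugepages=1024 above; with the 2048 kB Hugepagesize reported in the snapshot, the arithmetic is simply the ratio of the two. An assumed reconstruction of that step (variable names are illustrative; only the numbers come from the trace):

# size -> page-count step, sketched under the assumption that both
# quantities are in kB, as the trace values suggest.
size=2097152            # requested test size in kB
default_hugepages=2048  # Hugepagesize from /proc/meminfo, in kB
nr_hugepages=$(( size / default_hugepages ))
echo "nr_hugepages=$nr_hugepages"   # -> nr_hugepages=1024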
00:05:17.551 17:51:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:17.551 17:51:34 -- setup/common.sh@33 -- # echo 0
00:05:17.551 17:51:34 -- setup/common.sh@33 -- # return 0
00:05:17.551 17:51:34 -- setup/hugepages.sh@97 -- # anon=0
00:05:17.551 17:51:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:17.551 17:51:34 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:17.551 17:51:34 -- setup/common.sh@18 -- # local node=
00:05:17.551 17:51:34 -- setup/common.sh@19 -- # local var val
00:05:17.551 17:51:34 -- setup/common.sh@20 -- # local mem_f mem
00:05:17.551 17:51:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:17.551 17:51:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:17.551 17:51:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:17.551 17:51:34 -- setup/common.sh@28 -- # mapfile -t mem
00:05:17.551 17:51:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:17.551 17:51:34 -- setup/common.sh@31 -- # IFS=': '
00:05:17.551 17:51:34 -- setup/common.sh@31 -- # read -r var val _
00:05:17.551 17:51:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6784108 kB' 'MemAvailable: 9452940 kB' 'Buffers: 3456 kB' 'Cached: 2871808 kB' 'SwapCached: 0 kB' 'Active: 467872 kB' 'Inactive: 2525548 kB' 'Active(anon): 128652 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119760 kB' 'Mapped: 50856 kB' 'Shmem: 10496 kB' 'KReclaimable: 82020 kB' 'Slab: 186968 kB' 'SReclaimable: 82020 kB' 'SUnreclaim: 104948 kB' 'KernelStack: 6960 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 319480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 56004 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:05:17.551 17:51:34 -- setup/common.sh@31/@32 [xtrace elided: the snapshot keys, MemTotal through HugePages_Rsvd, are each compared against HugePages_Surp, hitting "continue" on every non-match]
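verify_nr_hugepages only samples AnonHugePages after the transparent-hugepage guard above ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]), i.e. when THP is not globally disabled. Roughly, and assuming the conventional sysfs path rather than the script's exact wording:

# Sketch of the guard; the path and variable names are the usual ones,
# not copied from the script. get_meminfo is sketched earlier.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # 0 in the trace above
else
    anon=0
fi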
00:05:17.552 17:51:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:17.552 17:51:34 -- setup/common.sh@33 -- # echo 0
00:05:17.552 17:51:34 -- setup/common.sh@33 -- # return 0
00:05:17.552 17:51:34 -- setup/hugepages.sh@99 -- # surp=0
00:05:17.552 17:51:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:17.552 17:51:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:17.552 17:51:34 -- setup/common.sh@18 -- # local node=
00:05:17.552 17:51:34 -- setup/common.sh@19 -- # local var val
00:05:17.552 17:51:34 -- setup/common.sh@20 -- # local mem_f mem
00:05:17.552 17:51:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:17.552 17:51:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:17.552 17:51:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:17.552 17:51:34 -- setup/common.sh@28 -- # mapfile -t mem
00:05:17.552 17:51:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:17.552 17:51:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6784108 kB' 'MemAvailable: 9452940 kB' 'Buffers: 3456 kB' 'Cached: 2871808 kB' 'SwapCached: 0 kB' 'Active: 467844 kB' 'Inactive: 2525548 kB' 'Active(anon): 128624 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119768 kB' 'Mapped: 50908 kB' 'Shmem: 10496 kB' 'KReclaimable: 82020 kB' 'Slab: 186964 kB' 'SReclaimable: 82020 kB' 'SUnreclaim: 104944 kB' 'KernelStack: 6928 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 319480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55988 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:05:17.552 17:51:34 -- setup/common.sh@31 -- # IFS=': '
00:05:17.552 17:51:34 -- setup/common.sh@31 -- # read -r var val _
00:05:17.553 17:51:34 -- setup/common.sh@31/@32 [xtrace elided: the snapshot keys, MemTotal through HugePages_Free, are each compared against HugePages_Rsvd, hitting "continue" on every non-match]
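Each counter costs the script a full scan of the snapshot. For interactive debugging, the same four hugepage fields can be pulled in one pass; this is an alternative one-liner, not what the test itself runs:

# One read instead of four scans; values match the snapshots above.
awk '/^HugePages_(Total|Free|Rsvd|Surp):/ { print $1, $2 }' /proc/meminfo
# HugePages_Total: 1024
# HugePages_Free: 1024
# HugePages_Rsvd: 0
# HugePages_Surp: 0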
00:05:17.553 17:51:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:17.554 17:51:34 -- setup/common.sh@33 -- # echo 0
00:05:17.554 17:51:34 -- setup/common.sh@33 -- # return 0
00:05:17.554 17:51:34 -- setup/hugepages.sh@100 -- # resv=0
00:05:17.554 17:51:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:17.554 nr_hugepages=1024
00:05:17.554 17:51:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:17.554 resv_hugepages=0
00:05:17.554 17:51:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:17.554 surplus_hugepages=0
00:05:17.554 17:51:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:17.554 anon_hugepages=0
00:05:17.554 17:51:34 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:17.554 17:51:34 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:17.554 17:51:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:17.554 17:51:34 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:17.554 17:51:34 -- setup/common.sh@18 -- # local node=
00:05:17.554 17:51:34 -- setup/common.sh@19 -- # local var val
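The asserts at hugepages.sh@107 and @109 spell out the hugepage accounting identity: the requested count must equal the configured nr_hugepages plus surplus plus reserved, and (at @110, below) the kernel-reported HugePages_Total must satisfy the same sum. With the values from this run, as a sketch of the check structure rather than the script's exact code:

# Values taken from the trace above; the checks mirror the asserts.
nr_hugepages=1024
surp=0
resv=0
(( 1024 == nr_hugepages + surp + resv )) || echo "FAIL: accounting mismatch"
(( 1024 == nr_hugepages ))               || echo "FAIL: unexpected surplus/reserved"
total=1024   # HugePages_Total as returned by get_meminfo below
(( total == nr_hugepages + surp + resv )) || echo "FAIL: kernel total mismatch"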
00:05:17.554 17:51:34 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.554 17:51:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.554 17:51:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.554 17:51:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.554 17:51:34 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.554 17:51:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.554 17:51:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.554 17:51:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.554 17:51:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6784108 kB' 'MemAvailable: 9452940 kB' 'Buffers: 3456 kB' 'Cached: 2871808 kB' 'SwapCached: 0 kB' 'Active: 467584 kB' 'Inactive: 2525548 kB' 'Active(anon): 128364 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119524 kB' 'Mapped: 50856 kB' 'Shmem: 10496 kB' 'KReclaimable: 82020 kB' 'Slab: 186952 kB' 'SReclaimable: 82020 kB' 'SUnreclaim: 104932 kB' 'KernelStack: 6896 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 319480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 56004 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:05:17.554 17:51:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.554 17:51:34 -- setup/common.sh@32 -- # continue 00:05:17.554 17:51:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.554 17:51:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.554 17:51:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.554 17:51:34 -- setup/common.sh@32 -- # continue 00:05:17.554 17:51:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.554 17:51:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.554 17:51:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.554 17:51:34 -- setup/common.sh@32 -- # continue 00:05:17.554 17:51:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.554 17:51:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.554 17:51:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.554 17:51:34 -- setup/common.sh@32 -- # continue 00:05:17.554 17:51:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.554 17:51:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.554 17:51:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.554 17:51:34 -- setup/common.sh@32 -- # continue 00:05:17.554 17:51:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.554 17:51:34 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.554 17:51:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.554 17:51:34 -- setup/common.sh@32 -- # continue 00:05:17.554 17:51:34 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.554 17:51:34 -- setup/common.sh@31 -- # read 
-r var val _
00:05:17.554 17:51:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:17.554 17:51:34 -- setup/common.sh@32 -- # continue
00:05:17.554 17:51:34 -- setup/common.sh@31 -- # IFS=': '
00:05:17.554 17:51:34 -- setup/common.sh@31 -- # read -r var val _
[... the same compare/continue/IFS/read xtrace repeats for every remaining meminfo field (Inactive through Unaccepted) ...]
00:05:17.555 17:51:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:17.555 17:51:34 -- setup/common.sh@33 -- # echo 1024
00:05:17.555 17:51:34 -- setup/common.sh@33 -- # return 0
00:05:17.555 17:51:34 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:17.555 17:51:34 -- setup/hugepages.sh@112 -- # get_nodes
00:05:17.555 17:51:34 -- setup/hugepages.sh@27 -- # local node
00:05:17.555 17:51:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:17.555 17:51:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:17.555 17:51:34 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:17.555 17:51:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
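The compare/continue runs above are setup/common.sh's get_meminfo walking a meminfo file one "key: value" pair at a time until the requested field matches, then echoing its value (1024 here). A minimal standalone sketch of that parsing approach — lookup_meminfo is a hypothetical name, not part of the SPDK scripts:

    #!/usr/bin/env bash
    shopt -s extglob
    # Print the value of one meminfo field, system-wide or for one NUMA node.
    lookup_meminfo() {
        local key=$1 node=$2 mem_f=/proc/meminfo
        local -a mem
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    lookup_meminfo HugePages_Total     # -> 1024 at this point in the run
    lookup_meminfo HugePages_Surp 0    # -> 0 for node0, as traced next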
00:05:17.555 17:51:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:17.555 17:51:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:17.555 17:51:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:17.555 17:51:34 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:17.555 17:51:34 -- setup/common.sh@18 -- # local node=0
00:05:17.555 17:51:34 -- setup/common.sh@19 -- # local var val
00:05:17.555 17:51:34 -- setup/common.sh@20 -- # local mem_f mem
00:05:17.555 17:51:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:17.555 17:51:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:17.555 17:51:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:17.555 17:51:34 -- setup/common.sh@28 -- # mapfile -t mem
00:05:17.555 17:51:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:17.555 17:51:34 -- setup/common.sh@31 -- # IFS=': '
00:05:17.555 17:51:34 -- setup/common.sh@31 -- # read -r var val _
00:05:17.555 17:51:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6784108 kB' 'MemUsed: 5455008 kB' 'SwapCached: 0 kB' 'Active: 467584 kB' 'Inactive: 2525548 kB' 'Active(anon): 128364 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2875264 kB' 'Mapped: 50856 kB' 'AnonPages: 119524 kB' 'Shmem: 10496 kB' 'KernelStack: 6896 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82020 kB' 'Slab: 186952 kB' 'SReclaimable: 82020 kB' 'SUnreclaim: 104932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:17.555 17:51:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:17.555 17:51:34 -- setup/common.sh@32 -- # continue
00:05:17.555 17:51:34 -- setup/common.sh@31 -- # IFS=': '
00:05:17.555 17:51:34 -- setup/common.sh@31 -- # read -r var val _
[... the same compare/continue/IFS/read xtrace repeats for every node0 meminfo field down to HugePages_Free ...]
00:05:17.556 17:51:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:17.556 17:51:34 -- setup/common.sh@33 -- # echo 0
00:05:17.556 17:51:34 -- setup/common.sh@33 -- # return 0
00:05:17.556 17:51:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:17.556 17:51:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:17.556 17:51:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:17.556 node0=1024 expecting 1024
00:05:17.556 17:51:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:17.556 17:51:34 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:17.556 17:51:34 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:17.556
00:05:17.556 real 0m0.930s
00:05:17.556 user 0m0.391s
00:05:17.556 sys 0m0.607s
00:05:17.556 17:51:34 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:17.556 17:51:34 -- common/autotest_common.sh@10 -- # set +x
00:05:17.556 ************************************
00:05:17.556 END TEST even_2G_alloc
00:05:17.556 ************************************
00:05:17.556 17:51:34 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:17.556 17:51:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:17.556 17:51:34 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:17.556 17:51:34 -- common/autotest_common.sh@10 -- # set +x
00:05:17.556 ************************************
00:05:17.556 START TEST odd_alloc
00:05:17.556 ************************************
00:05:17.556 17:51:34 -- common/autotest_common.sh@1114 -- # odd_alloc
00:05:17.556 17:51:34 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:17.556 17:51:34 -- setup/hugepages.sh@49 -- # local size=2098176
00:05:17.556 17:51:34 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:17.556 17:51:34 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:17.556 17:51:34 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:17.556 17:51:34 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:17.556 17:51:34 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:17.556 17:51:34 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:17.556 17:51:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:17.556 17:51:34 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:17.556 17:51:34 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:17.556 17:51:34 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:17.556 17:51:34 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:17.556 17:51:34 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:17.556 17:51:34 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:17.556 17:51:34 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:05:17.556 17:51:34 -- setup/hugepages.sh@83 -- # : 0
00:05:17.557 17:51:34 -- setup/hugepages.sh@84 -- # : 0
00:05:17.557 17:51:34 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:17.557 17:51:34 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:17.557 17:51:34 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:17.557 17:51:34 -- setup/hugepages.sh@160 -- # setup output
00:05:17.557 17:51:34 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:17.557 17:51:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:18.498 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:18.498 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:18.498 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:18.498 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:18.498 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
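The odd_alloc sizing traced above follows from the arithmetic: HUGEMEM=2049 (MiB) becomes size=2098176 kB, and at the default 2048 kB hugepage size that is 1024.5 pages, so the helper lands on the odd count 1025. The trace only shows the result (nr_hugepages=1025); ceiling division is one way to reproduce it:

    # sizing arithmetic implied by the trace; page size per 'Hugepagesize: 2048 kB'
    hugemem_mb=2049
    size_kb=$(( hugemem_mb * 1024 ))               # 2098176, matching 'local size=2098176'
    page_kb=2048
    nr=$(( (size_kb + page_kb - 1) / page_kb ))    # ceiling division -> 1025
    echo "$(( nr * page_kb )) kB"                  # 2099200 kB, matching 'Hugetlb: 2099200 kB'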
00:05:18.498 17:51:35 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:18.498 17:51:35 -- setup/hugepages.sh@89 -- # local node
00:05:18.498 17:51:35 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:18.498 17:51:35 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:18.498 17:51:35 -- setup/hugepages.sh@92 -- # local surp
00:05:18.498 17:51:35 -- setup/hugepages.sh@93 -- # local resv
00:05:18.498 17:51:35 -- setup/hugepages.sh@94 -- # local anon
00:05:18.498 17:51:35 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:18.498 17:51:35 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:18.499 17:51:35 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:18.499 17:51:35 -- setup/common.sh@18 -- # local node=
00:05:18.499 17:51:35 -- setup/common.sh@19 -- # local var val
00:05:18.499 17:51:35 -- setup/common.sh@20 -- # local mem_f mem
00:05:18.499 17:51:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:18.499 17:51:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:18.499 17:51:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:18.499 17:51:35 -- setup/common.sh@28 -- # mapfile -t mem
00:05:18.499 17:51:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:18.499 17:51:35 -- setup/common.sh@31 -- # IFS=': '
00:05:18.499 17:51:35 -- setup/common.sh@31 -- # read -r var val _
00:05:18.499 17:51:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6782228 kB' 'MemAvailable: 9451064 kB' 'Buffers: 3456 kB' 'Cached: 2871812 kB' 'SwapCached: 0 kB' 'Active: 468232 kB' 'Inactive: 2525552 kB' 'Active(anon): 129012 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120096 kB' 'Mapped: 50984 kB' 'Shmem: 10496 kB' 'KReclaimable: 82020 kB' 'Slab: 186980 kB' 'SReclaimable: 82020 kB' 'SUnreclaim: 104960 kB' 'KernelStack: 6976 kB' 'PageTables: 4516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 319480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 56052 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:05:18.499 17:51:35 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:18.499 17:51:35 -- setup/common.sh@32 -- # continue
00:05:18.499 17:51:35 -- setup/common.sh@31 -- # IFS=': '
00:05:18.499 17:51:35 -- setup/common.sh@31 -- # read -r var val _
[... the same compare/continue/IFS/read xtrace repeats for every meminfo field down to HardwareCorrupted ...]
00:05:18.500 17:51:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:18.500 17:51:35 -- setup/common.sh@33 -- # echo 0
00:05:18.500 17:51:35 -- setup/common.sh@33 -- # return 0
00:05:18.500 17:51:35 -- setup/hugepages.sh@97 -- # anon=0
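The hugepages.sh@96 test above gates the anon lookup on transparent hugepages: the bracketed token in /sys/kernel/mm/transparent_hugepage/enabled names the active THP mode, and only [never] would skip the AnonHugePages read. A short sketch of that gate, reusing the hypothetical lookup_meminfo from earlier:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(lookup_meminfo AnonHugePages)              # 0 kB in this run, hence anon=0
    fi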
00:05:18.500 17:51:35 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:18.500 17:51:35 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:18.500 17:51:35 -- setup/common.sh@18 -- # local node=
00:05:18.500 17:51:35 -- setup/common.sh@19 -- # local var val
00:05:18.500 17:51:35 -- setup/common.sh@20 -- # local mem_f mem
00:05:18.500 17:51:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:18.500 17:51:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:18.500 17:51:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:18.500 17:51:35 -- setup/common.sh@28 -- # mapfile -t mem
00:05:18.500 17:51:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:18.500 17:51:35 -- setup/common.sh@31 -- # IFS=': '
00:05:18.500 17:51:35 -- setup/common.sh@31 -- # read -r var val _
00:05:18.500 17:51:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6782228 kB' 'MemAvailable: 9451064 kB' 'Buffers: 3456 kB' 'Cached: 2871812 kB' 'SwapCached: 0 kB' 'Active: 467616 kB' 'Inactive: 2525552 kB' 'Active(anon): 128396 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119544 kB' 'Mapped: 50856 kB' 'Shmem: 10496 kB' 'KReclaimable: 82020 kB' 'Slab: 186980 kB' 'SReclaimable: 82020 kB' 'SUnreclaim: 104960 kB' 'KernelStack: 6944 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 322484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 56020 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:05:18.500 17:51:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:18.500 17:51:35 -- setup/common.sh@32 -- # continue
00:05:18.500 17:51:35 -- setup/common.sh@31 -- # IFS=': '
00:05:18.500 17:51:35 -- setup/common.sh@31 -- # read -r var val _
[... the same compare/continue/IFS/read xtrace repeats for every meminfo field down to HugePages_Rsvd ...]
00:05:18.501 17:51:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:18.501 17:51:35 -- setup/common.sh@33 -- # echo 0
00:05:18.501 17:51:35 -- setup/common.sh@33 -- # return 0
00:05:18.501 17:51:35 -- setup/hugepages.sh@99 -- # surp=0
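With anon and surp known and resv looked up next, verify_nr_hugepages has everything for the accounting identity the even_2G_alloc run already applied at hugepages.sh@110: the kernel's HugePages_Total should equal the requested count plus surplus plus reserved pages. Sketched with the same hypothetical helper:

    nr_hugepages=1025                        # requested by odd_alloc
    surp=$(lookup_meminfo HugePages_Surp)    # 0, as just traced
    resv=$(lookup_meminfo HugePages_Rsvd)    # looked up in the trace below
    total=$(lookup_meminfo HugePages_Total)  # 1025 per the dumps above
    (( total == nr_hugepages + surp + resv )) || echo 'unexpected hugepage count' >&2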
00:05:18.501 17:51:35 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:18.501 17:51:35 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:18.501 17:51:35 -- setup/common.sh@18 -- # local node=
00:05:18.501 17:51:35 -- setup/common.sh@19 -- # local var val
00:05:18.501 17:51:35 -- setup/common.sh@20 -- # local mem_f mem
00:05:18.501 17:51:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:18.501 17:51:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:18.501 17:51:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:18.501 17:51:35 -- setup/common.sh@28 -- # mapfile -t mem
00:05:18.501 17:51:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:18.501 17:51:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6782228 kB' 'MemAvailable: 9451064 kB' 'Buffers: 3456 kB' 'Cached: 2871812 kB' 'SwapCached: 0 kB' 'Active: 467920 kB' 'Inactive: 2525552 kB' 'Active(anon): 128700 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119920 kB' 'Mapped: 50908 kB' 'Shmem: 10496 kB' 'KReclaimable: 82020 kB' 'Slab: 186972 kB' 'SReclaimable: 82020 kB' 'SUnreclaim: 104952 kB' 'KernelStack: 6960 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 319480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55988 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:05:18.501 17:51:35 -- setup/common.sh@31 -- # IFS=': '
00:05:18.501 17:51:35 -- setup/common.sh@31 -- # read -r var val _
00:05:18.501 17:51:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:18.501 17:51:35 -- setup/common.sh@32 -- # continue
00:05:18.501 17:51:35 -- setup/common.sh@31 -- # IFS=': '
00:05:18.501 17:51:35 -- setup/common.sh@31 -- # read -r var val _
[... the same compare/continue/IFS/read xtrace repeats for each subsequent meminfo field ...]
00:05:18.502 17:51:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:18.502 17:51:35 -- setup/common.sh@32 -- # continue
00:05:18.502 17:51:35 -- setup/common.sh@31 -- # IFS=': '
00:05:18.503 17:51:35 -- setup/common.sh@31 -- # read -r var val _
00:05:18.503 17:51:35 -- setup/common.sh@32 -- # [[ CmaFree ==
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # continue 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # continue 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # continue 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # continue 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.503 17:51:35 -- setup/common.sh@33 -- # echo 0 00:05:18.503 17:51:35 -- setup/common.sh@33 -- # return 0 00:05:18.503 17:51:35 -- setup/hugepages.sh@100 -- # resv=0 00:05:18.503 17:51:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:18.503 nr_hugepages=1025 00:05:18.503 resv_hugepages=0 00:05:18.503 17:51:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:18.503 surplus_hugepages=0 00:05:18.503 17:51:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:18.503 17:51:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:18.503 anon_hugepages=0 00:05:18.503 17:51:35 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:18.503 17:51:35 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:18.503 17:51:35 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:18.503 17:51:35 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:18.503 17:51:35 -- setup/common.sh@18 -- # local node= 00:05:18.503 17:51:35 -- setup/common.sh@19 -- # local var val 00:05:18.503 17:51:35 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.503 17:51:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.503 17:51:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.503 17:51:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.503 17:51:35 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.503 17:51:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.503 17:51:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6782228 kB' 'MemAvailable: 9451064 kB' 'Buffers: 3456 kB' 'Cached: 2871816 kB' 'SwapCached: 0 kB' 'Active: 467908 kB' 'Inactive: 2525552 kB' 'Active(anon): 128688 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119824 kB' 'Mapped: 50856 kB' 'Shmem: 10496 kB' 'KReclaimable: 82020 kB' 'Slab: 186940 kB' 'SReclaimable: 82020 kB' 'SUnreclaim: 104920 kB' 'KernelStack: 6896 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 319480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55988 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # continue 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # continue 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # continue 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # continue 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # continue 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # continue 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # continue 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # continue 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # continue 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # continue 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.503 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.503 17:51:35 -- setup/common.sh@32 -- # continue 00:05:18.503 17:51:35 
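What the trace above is doing: get_meminfo (setup/common.sh) slurps a meminfo file with mapfile, strips any "Node N " prefix, then re-reads it line by line with IFS=': ' until the requested key matches, echoing the value. A simplified standalone reconstruction, sketched from the trace rather than copied from the script's source:

    # Fetch one field from /proc/meminfo (or a per-node meminfo file),
    # the way the traced get_meminfo loop does. Simplified sketch.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node counters live under /sys/devices/system/node/nodeN/.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Node files prefix each line with "Node N "; strip it so the
        # "Key: value" split below works for both file flavors.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Rsvd    -> 0 on this box
    #      get_meminfo_sketch HugePages_Surp 0  -> node 0's surplus count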
[xtrace condensed: get_meminfo repeats the same scan, comparing every /proc/meminfo key — MemTotal through Unaccepted — against HugePages_Total and skipping each with 'continue' until it matches]
17:51:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.504 17:51:35 -- setup/common.sh@33 -- # echo 1025 00:05:18.504 17:51:35 -- setup/common.sh@33 -- # return 0 00:05:18.504 17:51:35 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:18.504 17:51:35 -- setup/hugepages.sh@112 -- # get_nodes 00:05:18.504 17:51:35 -- setup/hugepages.sh@27 -- # local node 00:05:18.504 17:51:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:18.504 17:51:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:18.504 17:51:35 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:18.504 17:51:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:18.504 17:51:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:18.504 17:51:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:18.504 17:51:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:18.504 17:51:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:18.504 17:51:35 -- setup/common.sh@18 -- # local node=0 00:05:18.504 17:51:35 -- setup/common.sh@19 -- # local var val 00:05:18.504 17:51:35 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.504 17:51:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.504 17:51:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:18.504 17:51:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:18.504 17:51:35 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.504 17:51:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.504 17:51:35 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.504 17:51:35 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.504 17:51:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6782228 kB' 'MemUsed: 5456888 kB' 'SwapCached: 0 kB' 'Active: 467832 kB' 'Inactive: 2525552 kB' 'Active(anon): 128612 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2875272 kB' 'Mapped: 50856 kB' 'AnonPages: 119748 kB' 'Shmem: 10496 kB' 'KernelStack: 6944 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82020 kB' 'Slab: 186940 kB' 'SReclaimable: 82020 kB' 'SUnreclaim: 104920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:18.504
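The checks at hugepages.sh@107/@110 above reduce to simple accounting: the kernel's HugePages_Total must equal the requested pool plus any surplus and reserved pages, and each NUMA node's share must match what the test expects (on this single-node VM everything lands on node 0). A sketch of that arithmetic, assuming the get_meminfo_sketch helper above and simplifying the traced hugepages.sh logic:

    verify_hugepages_sketch() {
        local expected=$1    # e.g. 1025 for the odd_alloc case above
        local total resv surp
        total=$(get_meminfo_sketch HugePages_Total)
        resv=$(get_meminfo_sketch HugePages_Rsvd)
        surp=$(get_meminfo_sketch HugePages_Surp)
        echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp"
        # System-wide: the reported total must account for surplus + reserved.
        (( total == expected + surp + resv )) || return 1
        # Per node: on a one-node machine each node carries the full pool.
        local node_dir node got
        for node_dir in /sys/devices/system/node/node[0-9]*; do
            node=${node_dir##*node}
            got=$(get_meminfo_sketch HugePages_Total "$node")
            echo "node$node=$got expecting $expected"
            [[ $got == "$expected" ]] || return 1
        done
    }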
[xtrace condensed: the read loop scans node0's meminfo keys — MemTotal through HugePages_Free — against HugePages_Surp, skipping each with 'continue']
17:51:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.505 17:51:35 -- setup/common.sh@33 -- # echo 0 00:05:18.505 17:51:35 -- setup/common.sh@33 -- # return 0 00:05:18.505 17:51:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:18.505 17:51:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:18.505 17:51:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:18.505 17:51:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:18.505 17:51:35 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:18.505 node0=1025 expecting 1025 17:51:35 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:18.505 00:05:18.505 real 0m0.973s 00:05:18.505 user 0m0.425s 00:05:18.505 sys 0m0.617s 00:05:18.505 17:51:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.505 17:51:35 -- common/autotest_common.sh@10 -- # set +x 00:05:18.505 ************************************ 00:05:18.505 END TEST odd_alloc 00:05:18.505 ************************************ 00:05:18.765 17:51:35 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:18.765 17:51:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.765 17:51:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.765 17:51:35 -- common/autotest_common.sh@10 -- # set +x 00:05:18.765
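run_test (common/autotest_common.sh) is the wrapper producing the banners and the real/user/sys timing seen above: it prints a START banner, times the test function, and closes with an END banner. A hypothetical simplification of that pattern, not the real implementation:

    run_test_sketch() {
        local test_name=$1; shift
        local banner='************************************'
        printf '%s\nSTART TEST %s\n%s\n' "$banner" "$test_name" "$banner"
        time "$@"    # emits the real/user/sys triple seen in the log
        local rc=$?  # exit status of the timed command
        printf '%s\nEND TEST %s\n%s\n' "$banner" "$test_name" "$banner"
        return $rc
    }
    # e.g. run_test_sketch custom_alloc custom_alloc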
************************************ 00:05:18.765 START TEST custom_alloc 00:05:18.765 ************************************ 00:05:18.765 17:51:35 -- common/autotest_common.sh@1114 -- # custom_alloc 00:05:18.765 17:51:35 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:18.765 17:51:35 -- setup/hugepages.sh@169 -- # local node 00:05:18.765 17:51:35 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:18.765 17:51:35 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:18.765 17:51:35 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:18.765 17:51:35 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:18.765 17:51:35 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:18.765 17:51:35 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:18.765 17:51:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:18.765 17:51:35 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:18.766 17:51:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:18.766 17:51:35 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:18.766 17:51:35 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:18.766 17:51:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:18.766 17:51:35 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:18.766 17:51:35 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:18.766 17:51:35 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:18.766 17:51:35 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:18.766 17:51:35 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:18.766 17:51:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:18.766 17:51:35 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:18.766 17:51:35 -- setup/hugepages.sh@83 -- # : 0 00:05:18.766 17:51:35 -- setup/hugepages.sh@84 -- # : 0 00:05:18.766 17:51:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:18.766 17:51:35 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:18.766 17:51:35 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:18.766 17:51:35 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:18.766 17:51:35 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:18.766 17:51:35 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:18.766 17:51:35 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:18.766 17:51:35 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:18.766 17:51:35 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:18.766 17:51:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:18.766 17:51:35 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:18.766 17:51:35 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:18.766 17:51:35 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:18.766 17:51:35 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:18.766 17:51:35 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:18.766 17:51:35 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:18.766 17:51:35 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:18.766 17:51:35 -- setup/hugepages.sh@78 -- # return 0 00:05:18.766 17:51:35 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:18.766 17:51:35 -- setup/hugepages.sh@187 -- # setup output 00:05:18.766 17:51:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.766 17:51:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:19.335 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:19.335 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:19.335 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:19.335 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:19.335 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:19.597 17:51:36 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:19.597 17:51:36 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:19.597 17:51:36 -- setup/hugepages.sh@89 -- # local node 00:05:19.597 17:51:36 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:19.597 17:51:36 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:19.597 17:51:36 -- setup/hugepages.sh@92 -- # local surp 00:05:19.597 17:51:36 -- setup/hugepages.sh@93 -- # local resv 00:05:19.597 17:51:36 -- setup/hugepages.sh@94 -- # local anon 00:05:19.597 17:51:36 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:19.597 17:51:36 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:19.597 17:51:36 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:19.597 17:51:36 -- setup/common.sh@18 -- # local node= 00:05:19.597 17:51:36 -- setup/common.sh@19 -- # local var val 00:05:19.597 17:51:36 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.597 17:51:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.597 17:51:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.597 17:51:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.597 17:51:36 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.597 17:51:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.597 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 17:51:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7837240 kB' 'MemAvailable: 10506072 kB' 'Buffers: 3456 kB' 'Cached: 2871816 kB' 'SwapCached: 0 kB' 'Active: 465776 kB' 'Inactive: 2525556 kB' 'Active(anon): 126556 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117672 kB' 'Mapped: 50312 kB' 'Shmem: 10496 kB' 'KReclaimable: 82004 kB' 'Slab: 186676 kB' 'SReclaimable: 82004 kB' 'SUnreclaim: 104672 kB' 'KernelStack: 6844 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 306024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55956 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:05:19.597 17:51:36 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.597 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.597 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 17:51:36 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.597 
[xtrace condensed: the read loop continues comparing each remaining /proc/meminfo key — MemAvailable through HardwareCorrupted — against AnonHugePages, skipping each with 'continue' until it matches]
17:51:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.598 17:51:36 -- setup/common.sh@33 -- # echo 0 00:05:19.598 17:51:36 -- setup/common.sh@33 -- # return 0 00:05:19.598 17:51:36 -- setup/hugepages.sh@97 -- # anon=0 00:05:19.598 17:51:36 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:19.598 17:51:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.598 17:51:36 -- setup/common.sh@18 -- # local node= 00:05:19.598 17:51:36 -- setup/common.sh@19 -- # local var val 00:05:19.598 17:51:36 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.598 17:51:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.598 17:51:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.598 17:51:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.598 17:51:36 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.599 17:51:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7837500 kB' 'MemAvailable: 10506332 kB' 'Buffers: 3456 kB' 'Cached: 2871816 kB' 'SwapCached: 0 kB' 'Active: 465864 kB' 'Inactive: 2525556 kB' 'Active(anon): 126644 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117596 kB' 'Mapped: 50364 kB' 'Shmem: 10496 kB' 'KReclaimable: 82004 kB' 'Slab: 186672 kB' 'SReclaimable: 82004 kB' 'SUnreclaim: 104668 kB' 'KernelStack: 6876 kB' 'PageTables: 3892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 304144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55876 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:05:19.599
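The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test traced at the start of verify_nr_hugepages reads /sys/kernel/mm/transparent_hugepage/enabled, where the bracketed word marks the active THP mode; anonymous hugepages are only counted toward the pool check when THP is not switched to never. A minimal sketch of that gate, reusing the helper sketched earlier:

    anon=0
    thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp_mode != *'[never]'* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)
    fi
    echo "anon_hugepages=$anon"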
-- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 17:51:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # 
continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.600 17:51:36 -- setup/common.sh@33 -- # echo 0 00:05:19.600 17:51:36 -- setup/common.sh@33 -- # return 0 00:05:19.600 17:51:36 -- setup/hugepages.sh@99 -- # surp=0 00:05:19.600 17:51:36 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:19.600 17:51:36 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:19.600 17:51:36 -- setup/common.sh@18 -- # local node= 00:05:19.600 17:51:36 -- setup/common.sh@19 -- # local var val 00:05:19.600 17:51:36 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.600 17:51:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.600 17:51:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.600 17:51:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.600 17:51:36 -- 
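With surp=0 recorded, the same full scan now repeats a third time for HugePages_Rsvd. Outside the harness, the counters these repeated scans compute can be pulled in one pass; a hedged equivalent, with values as they appear in the snapshot above:

  grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)):' /proc/meminfo
  # AnonHugePages:         0 kB
  # HugePages_Total:     512
  # HugePages_Free:      512
  # HugePages_Rsvd:        0
  # HugePages_Surp:        0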
setup/common.sh@28 -- # mapfile -t mem 00:05:19.600 17:51:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.600 17:51:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7837744 kB' 'MemAvailable: 10506576 kB' 'Buffers: 3456 kB' 'Cached: 2871816 kB' 'SwapCached: 0 kB' 'Active: 465804 kB' 'Inactive: 2525556 kB' 'Active(anon): 126584 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117716 kB' 'Mapped: 50020 kB' 'Shmem: 10496 kB' 'KReclaimable: 82004 kB' 'Slab: 186668 kB' 'SReclaimable: 82004 kB' 'SUnreclaim: 104664 kB' 'KernelStack: 6848 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 304144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55876 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 17:51:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 
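Each record is consumed as three fields on purpose: with IFS set to ':' and space, a meminfo line splits into the key, the number, and the unit, and the unit is discarded into the throwaway variable. For example:

  IFS=': ' read -r var val _ <<< 'MemFree: 7837744 kB'
  echo "$var=$val"    # MemFree=7837744  (the trailing "kB" lands in _)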
00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- 
setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.601 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.601 17:51:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.601 17:51:36 -- setup/common.sh@33 -- # echo 0 00:05:19.601 17:51:36 -- setup/common.sh@33 -- # return 0 00:05:19.601 17:51:36 -- setup/hugepages.sh@100 -- # resv=0 00:05:19.601 17:51:36 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:19.601 nr_hugepages=512 00:05:19.601 resv_hugepages=0 00:05:19.601 17:51:36 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:19.601 surplus_hugepages=0 00:05:19.601 17:51:36 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:19.601 anon_hugepages=0 00:05:19.601 17:51:36 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:19.601 17:51:36 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:19.601 17:51:36 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:19.601 17:51:36 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:19.601 17:51:36 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:19.601 17:51:36 -- setup/common.sh@18 -- # local node= 00:05:19.601 17:51:36 -- setup/common.sh@19 -- # local var val 00:05:19.601 17:51:36 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.601 17:51:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.601 17:51:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.601 17:51:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.601 17:51:36 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.601 17:51:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.602 17:51:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7837744 kB' 'MemAvailable: 10506576 kB' 'Buffers: 3456 kB' 'Cached: 2871816 kB' 'SwapCached: 0 kB' 'Active: 465532 kB' 'Inactive: 2525556 kB' 'Active(anon): 126312 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117448 kB' 'Mapped: 50008 kB' 'Shmem: 
10496 kB' 'KReclaimable: 82004 kB' 'Slab: 186668 kB' 'SReclaimable: 82004 kB' 'SUnreclaim: 104664 kB' 'KernelStack: 6832 kB' 'PageTables: 3888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 304144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55876 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 
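This final scan closes the loop on the accounting check at hugepages.sh@107: the kernel's HugePages_Total must equal the requested pool plus the surplus and reserved pages just read back (512, 0 and 0). As a standalone check (values taken from the trace; a sketch, not the hugepages.sh code):

  nr_hugepages=512 surp=0 resv=0
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  (( total == nr_hugepages + surp + resv )) &&
      echo "hugepage pool consistent: $total pages"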
-- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 
17:51:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 
17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.603 17:51:36 -- setup/common.sh@33 -- # echo 512 00:05:19.603 17:51:36 -- setup/common.sh@33 -- # return 0 00:05:19.603 17:51:36 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:19.603 17:51:36 -- setup/hugepages.sh@112 -- # get_nodes 00:05:19.603 17:51:36 -- setup/hugepages.sh@27 -- # local node 00:05:19.603 17:51:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:19.603 17:51:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:19.603 17:51:36 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:19.603 17:51:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:19.603 17:51:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:19.603 17:51:36 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:19.603 17:51:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:19.603 17:51:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.603 17:51:36 -- setup/common.sh@18 -- # local node=0 00:05:19.603 17:51:36 -- setup/common.sh@19 -- # local var val 00:05:19.603 17:51:36 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.603 17:51:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.603 17:51:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:19.603 17:51:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:19.603 17:51:36 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.603 17:51:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7837744 kB' 'MemUsed: 4401372 kB' 'SwapCached: 0 kB' 'Active: 465536 kB' 'Inactive: 2525556 kB' 'Active(anon): 126316 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2875272 kB' 'Mapped: 50008 kB' 'AnonPages: 117444 kB' 'Shmem: 10496 kB' 'KernelStack: 6832 kB' 'PageTables: 3888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82004 kB' 'Slab: 186668 kB' 'SReclaimable: 82004 kB' 'SUnreclaim: 104664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ MemUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.604 17:51:36 -- 
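The lookup running here differs from the earlier ones only in its source: because a node id (0) was passed, common.sh@24 switched mem_f to the per-node sysfs file, whose records carry a "Node 0 " prefix that the expansion at common.sh@29 strips before the scan. A hedged illustration of that input difference, with the figures from the snapshot above:

  head -3 /sys/devices/system/node/node0/meminfo
  # Node 0 MemTotal:       12239116 kB
  # Node 0 MemFree:         7837744 kB
  # Node 0 MemUsed:         4401372 kB
  # "${mem[@]#Node +([0-9]) }" removes the "Node 0 " prefix, so the same
  # IFS=': ' read -r var val _ loop works on both files unchanged.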
setup/common.sh@31 -- # read -r var val _ 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # 
continue 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # continue 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.604 17:51:36 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.604 17:51:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.604 17:51:36 -- setup/common.sh@33 -- # echo 0 00:05:19.604 17:51:36 -- setup/common.sh@33 -- # return 0 00:05:19.604 17:51:36 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:19.604 17:51:36 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:19.604 17:51:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:19.604 node0=512 expecting 512 00:05:19.604 17:51:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:19.604 17:51:36 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:19.604 ************************************ 00:05:19.604 END TEST custom_alloc 00:05:19.604 ************************************ 00:05:19.604 17:51:36 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:19.604 00:05:19.604 real 0m1.002s 00:05:19.604 user 0m0.409s 00:05:19.604 sys 0m0.651s 00:05:19.604 17:51:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.604 17:51:36 -- common/autotest_common.sh@10 -- # set +x 00:05:19.604 17:51:36 -- 
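custom_alloc passes, and the next test, no_shrink_alloc, asks get_test_nr_hugepages for a 2097152 kB pool on node 0, which the trace below resolves to nr_hugepages=1024. That count is consistent with dividing the requested size by the 2048 kB default hugepage size; the trace itself only shows the size check and the result, so the division here is an inference (sketch, variable names borrowed from the trace):

  size=2097152                                  # requested pool, in kB
  default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB here
  (( size >= default_hugepages )) && nr_hugepages=$(( size / default_hugepages ))
  echo "nr_hugepages=$nr_hugepages"             # 1024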
00:05:19.604 17:51:36 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:19.604 17:51:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:19.604 17:51:36 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:19.604 17:51:36 -- common/autotest_common.sh@10 -- # set +x
00:05:19.863 ************************************
00:05:19.863 START TEST no_shrink_alloc
00:05:19.863 ************************************
00:05:19.863 17:51:36 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:05:19.863 17:51:36 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:19.863 17:51:36 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:19.863 17:51:36 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:19.863 17:51:36 -- setup/hugepages.sh@51 -- # shift
00:05:19.863 17:51:36 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:19.863 17:51:36 -- setup/hugepages.sh@52 -- # local node_ids
00:05:19.863 17:51:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:19.863 17:51:36 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:19.863 17:51:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:19.863 17:51:36 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:19.863 17:51:36 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:19.863 17:51:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:19.863 17:51:36 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:19.863 17:51:36 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:19.863 17:51:36 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:19.863 17:51:36 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:19.863 17:51:36 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:19.863 17:51:36 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:19.863 17:51:36 -- setup/hugepages.sh@73 -- # return 0
00:05:19.863 17:51:36 -- setup/hugepages.sh@198 -- # setup output
00:05:19.863 17:51:36 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:19.863 17:51:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:20.430 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:20.430 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:20.430 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:20.430 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:20.430 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
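The get_test_nr_hugepages call traced above converts a kB budget into a hugepage count: with 'Hugepagesize: 2048 kB', 2097152 kB / 2048 kB = 1024 pages, all assigned to node 0 because the node list ('0') was passed. A hedged sketch of just that sizing step, assuming both size and Hugepagesize are in kB as in the meminfo dumps below; the real function in setup/hugepages.sh also validates size against the default hugepage size and handles multi-node splits:

    # sketch only: size in kB, followed by an optional list of NUMA node ids
    get_test_nr_hugepages() {
        local size=$1; shift
        local node_ids=("$@")                 # e.g. ('0')
        local default_hugepages=2048          # kB, matches 'Hugepagesize: 2048 kB'
        local nr_hugepages=$(( size / default_hugepages ))
        declare -ga nodes_test=()
        local node
        for node in "${node_ids[@]}"; do
            nodes_test[node]=$nr_hugepages    # pin the whole budget to each listed node
        done
        echo "nr_hugepages=$nr_hugepages nodes_test[0]=${nodes_test[0]:-}"
    }

    get_test_nr_hugepages 2097152 0   # -> nr_hugepages=1024 nodes_test[0]=1024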
00:05:20.692 17:51:37 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:20.692 17:51:37 -- setup/hugepages.sh@89 -- # local node
00:05:20.692 17:51:37 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:20.692 17:51:37 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:20.692 17:51:37 -- setup/hugepages.sh@92 -- # local surp
00:05:20.692 17:51:37 -- setup/hugepages.sh@93 -- # local resv
00:05:20.692 17:51:37 -- setup/hugepages.sh@94 -- # local anon
00:05:20.692 17:51:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:20.692 17:51:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:20.692 17:51:37 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:20.692 17:51:37 -- setup/common.sh@18 -- # local node=
00:05:20.692 17:51:37 -- setup/common.sh@19 -- # local var val
00:05:20.692 17:51:37 -- setup/common.sh@20 -- # local mem_f mem
00:05:20.692 17:51:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.692 17:51:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:20.692 17:51:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:20.692 17:51:37 -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.692 17:51:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.692 17:51:37 -- setup/common.sh@31 -- # IFS=': '
00:05:20.692 17:51:37 -- setup/common.sh@31 -- # read -r var val _
00:05:20.692 17:51:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6794444 kB' 'MemAvailable: 9463276 kB' 'Buffers: 3456 kB' 'Cached: 2871816 kB' 'SwapCached: 0 kB' 'Active: 465652 kB' 'Inactive: 2525556 kB' 'Active(anon): 126432 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117824 kB' 'Mapped: 50024 kB' 'Shmem: 10496 kB' 'KReclaimable: 82004 kB' 'Slab: 186688 kB' 'SReclaimable: 82004 kB' 'SUnreclaim: 104684 kB' 'KernelStack: 6860 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55972 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:05:20.693 [setup/common.sh@31-32 loop: keys MemTotal through HardwareCorrupted each fail the AnonHugePages match and continue]
00:05:20.694 17:51:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:20.694 17:51:37 -- setup/common.sh@33 -- # echo 0
00:05:20.694 17:51:37 -- setup/common.sh@33 -- # return 0
00:05:20.694 17:51:37 -- setup/hugepages.sh@97 -- # anon=0
00:05:20.694 17:51:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:20.694 17:51:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:20.694 17:51:37 -- setup/common.sh@18 -- # local node=
00:05:20.694 17:51:37 -- setup/common.sh@19 -- # local var val
00:05:20.694 17:51:37 -- setup/common.sh@20 -- # local mem_f mem
00:05:20.694 17:51:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.694 17:51:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:20.694 17:51:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:20.694 17:51:37 -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.694 17:51:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.694 17:51:37 -- setup/common.sh@31 -- # IFS=': '
00:05:20.694 17:51:37 -- setup/common.sh@31 -- # read -r var val _
00:05:20.694 17:51:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6794192 kB' 'MemAvailable: 9463024 kB' 'Buffers: 3456 kB' 'Cached: 2871816 kB' 'SwapCached: 0 kB' 'Active: 465604 kB' 'Inactive: 2525556 kB' 'Active(anon): 126384 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117560 kB' 'Mapped: 50008 kB' 'Shmem: 10496 kB' 'KReclaimable: 82004 kB' 'Slab: 186696 kB' 'SReclaimable: 82004 kB' 'SUnreclaim: 104692 kB' 'KernelStack: 6848 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55956 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
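The hugepages.sh@96 test above is a transparent-hugepage gate: "always [madvise] never" is the selector format of the kernel's THP policy file (the bracketed entry is the active mode), and AnonHugePages is only collected when "[never]" is not selected. A sketch of that gate; the policy file path is an assumption, since the trace shows only the already-expanded string:

    # path assumed; the trace only shows its contents, e.g. "always [madvise] never"
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp != *\[never\]* ]]; then
        # THP is not disabled, so anon hugepages are worth counting
        anon=$(get_meminfo AnonHugePages)   # get_meminfo as sketched earlier
    fi
    echo "anon_hugepages=$anon"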
00:05:20.695 [setup/common.sh@31-32 loop: keys MemTotal through HugePages_Rsvd each fail the HugePages_Surp match and continue]
00:05:20.696 17:51:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:20.696 17:51:37 -- setup/common.sh@33 -- # echo 0
00:05:20.696 17:51:37 -- setup/common.sh@33 -- # return 0
00:05:20.696 17:51:37 -- setup/hugepages.sh@99 -- # surp=0
00:05:20.696 17:51:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:20.696 17:51:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:20.696 17:51:37 -- setup/common.sh@18 -- # local node=
00:05:20.696 17:51:37 -- setup/common.sh@19 -- # local var val
00:05:20.696 17:51:37 -- setup/common.sh@20 -- # local mem_f mem
00:05:20.696 17:51:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.696 17:51:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:20.696 17:51:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:20.696 17:51:37 -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.696 17:51:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.696 17:51:37 -- setup/common.sh@31 -- # IFS=': '
00:05:20.696 17:51:37 -- setup/common.sh@31 -- # read -r var val _
00:05:20.696 17:51:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6794192 kB' 'MemAvailable: 9463024 kB' 'Buffers: 3456 kB' 'Cached: 2871816 kB' 'SwapCached: 0 kB' 'Active: 465364 kB' 'Inactive: 2525556 kB' 'Active(anon): 126144 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117304 kB' 'Mapped: 50008 kB' 'Shmem: 10496 kB' 'KReclaimable: 82004 kB' 'Slab: 186692 kB' 'SReclaimable: 82004 kB' 'SUnreclaim: 104688 kB' 'KernelStack: 6848 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55956 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
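Each counter in this sequence (anon, surp, resv, and total below) costs a full scan of /proc/meminfo through get_meminfo. Purely for comparison, a single-pass alternative; this is not how setup/common.sh does it:

    # one pass over /proc/meminfo for all four hugepage counters
    awk '/^HugePages_(Total|Free|Rsvd|Surp):/ { print $1, $2 }' /proc/meminfo
    # HugePages_Total: 1024
    # HugePages_Free: 1024
    # HugePages_Rsvd: 0
    # HugePages_Surp: 0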
00:05:20.696 [setup/common.sh@31-32 loop: keys MemTotal through HugePages_Free each fail the HugePages_Rsvd match and continue]
00:05:20.697 17:51:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:20.697 17:51:37 -- setup/common.sh@33 -- # echo 0
00:05:20.697 17:51:37 -- setup/common.sh@33 -- # return 0
00:05:20.697 nr_hugepages=1024
00:05:20.697 resv_hugepages=0
00:05:20.697 surplus_hugepages=0
00:05:20.697 anon_hugepages=0
00:05:20.697 17:51:37 -- setup/hugepages.sh@100 -- # resv=0
00:05:20.697 17:51:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:20.697 17:51:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:20.697 17:51:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:20.697 17:51:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:20.697 17:51:37 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:20.697 17:51:37 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:20.697 17:51:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:20.697 17:51:37 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:20.697 17:51:37 -- setup/common.sh@18 -- # local node=
00:05:20.697 17:51:37 -- setup/common.sh@19 -- # local var val
00:05:20.697 17:51:37 -- setup/common.sh@20 -- # local mem_f mem
00:05:20.697 17:51:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.697 17:51:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:20.697 17:51:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:20.697 17:51:37 -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.697 17:51:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.697 17:51:37 -- setup/common.sh@31 -- # IFS=': '
00:05:20.697 17:51:37 -- setup/common.sh@31 -- # read -r var val _
00:05:20.697 17:51:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6794192 kB' 'MemAvailable: 9463024 kB' 'Buffers: 3456 kB' 'Cached: 2871816 kB' 'SwapCached: 0 kB' 'Active: 465624 kB' 'Inactive: 2525556 kB' 'Active(anon): 126404 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117564 kB' 'Mapped: 50008 kB' 'Shmem: 10496 kB' 'KReclaimable: 82004 kB' 'Slab: 186692 kB' 'SReclaimable: 82004 kB' 'SUnreclaim: 104688 kB' 'KernelStack: 6848 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55956 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
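The arithmetic being verified around here is the hugepage accounting identity: HugePages_Total must equal the requested nr_hugepages plus surplus and reserved pages, i.e. 1024 == 1024 + 0 + 0 in this run. A compact restatement of the check mirrored from hugepages.sh@107-110 as seen in the trace, reusing the get_meminfo sketch from earlier (an assumption, not the script's exact code):

    total=$(get_meminfo HugePages_Total)   # 1024 in this run
    surp=$(get_meminfo HugePages_Surp)     # 0
    resv=$(get_meminfo HugePages_Rsvd)     # 0
    nr_hugepages=1024                      # what get_test_nr_hugepages requested
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
    else
        echo "mismatch: HugePages_Total=$total, expected $((nr_hugepages + surp + resv))" >&2
    fi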
00:05:20.697 17:51:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:20.697 17:51:37 -- setup/common.sh@32 -- # continue
00:05:20.697 17:51:37 -- setup/common.sh@31 -- # IFS=': '
00:05:20.697 17:51:37 -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue trace repeats for every remaining /proc/meminfo field, Buffers through Unaccepted ...]
00:05:20.699 17:51:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:20.699 17:51:37 -- setup/common.sh@33 -- # echo 1024
00:05:20.699 17:51:37 -- setup/common.sh@33 -- # return 0
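The block above is setup/common.sh's get_meminfo scanning /proc/meminfo (or a per-node sysfs copy) field by field with IFS=': ' read -r var val _ until the requested key matches, then echoing the bare value; here it returns 1024 for HugePages_Total. A minimal standalone sketch of that pattern follows; it is an assumed reconstruction from the trace, not the verbatim SPDK script:

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above (reconstructed from
# the xtrace output; not the verbatim setup/common.sh).
shopt -s extglob    # the +([0-9]) pattern below needs extglob

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # A node argument switches to the per-node sysfs meminfo file.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan field by field; print the value of the first matching key.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && echo "$val" && return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Total    # -> 1024 on this runner
get_meminfo HugePages_Surp 0   # node-0 value; 0 in this run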
00:05:20.699 17:51:37 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:20.699 17:51:37 -- setup/hugepages.sh@112 -- # get_nodes
00:05:20.699 17:51:37 -- setup/hugepages.sh@27 -- # local node
00:05:20.699 17:51:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:20.699 17:51:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:20.699 17:51:37 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:20.699 17:51:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:20.699 17:51:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:20.699 17:51:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:20.699 17:51:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:20.699 17:51:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:20.699 17:51:37 -- setup/common.sh@18 -- # local node=0
00:05:20.699 17:51:37 -- setup/common.sh@19 -- # local var val
00:05:20.699 17:51:37 -- setup/common.sh@20 -- # local mem_f mem
00:05:20.699 17:51:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.699 17:51:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:20.699 17:51:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:20.699 17:51:37 -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.699 17:51:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.699 17:51:37 -- setup/common.sh@31 -- # IFS=': '
00:05:20.699 17:51:37 -- setup/common.sh@31 -- # read -r var val _
00:05:20.699 17:51:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6794192 kB' 'MemUsed: 5444924 kB' 'SwapCached: 0 kB' 'Active: 465596 kB' 'Inactive: 2525556 kB' 'Active(anon): 126376 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2875272 kB' 'Mapped: 50008 kB' 'AnonPages: 117560 kB' 'Shmem: 10496 kB' 'KernelStack: 6848 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82004 kB' 'Slab: 186692 kB' 'SReclaimable: 82004 kB' 'SUnreclaim: 104688 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:20.699 17:51:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:20.699 17:51:37 -- setup/common.sh@32 -- # continue
[... identical compare/continue trace repeats for the remaining node0 fields, MemFree through HugePages_Free ...]
00:05:20.700 17:51:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:20.700 17:51:37 -- setup/common.sh@33 -- # echo 0
00:05:20.700 17:51:37 -- setup/common.sh@33 -- # return 0
00:05:20.700 17:51:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:20.700 17:51:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:20.700 17:51:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:20.700 17:51:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:20.700 17:51:37 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:20.700 node0=1024 expecting 1024
00:05:20.700 17:51:37 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:20.700 17:51:37 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:20.700 17:51:37 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:20.700 17:51:37 -- setup/hugepages.sh@202 -- # setup output
00:05:20.700 17:51:37 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:20.700 17:51:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:21.640 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:21.640 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:21.640 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:21.640 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:21.640 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:21.640 INFO: Requested 512 hugepages but 1024 already allocated on node0
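scripts/setup.sh runs here with NRHUGE=512 and CLEAR_HUGE=no and, as the INFO line shows, leaves the existing reservation alone: the 1024 pages already allocated cover the 512 requested, and nothing asked for a teardown first. A rough sketch of that decision; the sysfs path is the real kernel interface, while the flow itself is an assumed simplification of setup.sh:

#!/usr/bin/env bash
# Assumed simplification of the hugepage-allocation decision behind the
# "Requested 512 hugepages but 1024 already allocated" message above.
NRHUGE=${NRHUGE:-512}
CLEAR_HUGE=${CLEAR_HUGE:-no}
node_sysfs=/sys/devices/system/node/node0/hugepages/hugepages-2048kB

# Only tear down an existing reservation when explicitly asked to.
if [[ $CLEAR_HUGE == yes ]]; then
    echo 0 > "$node_sysfs/nr_hugepages"
fi

current=$(<"$node_sysfs/nr_hugepages")
if (( current >= NRHUGE )); then
    echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node0"
else
    echo "$NRHUGE" > "$node_sysfs/nr_hugepages"
fi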
00:05:21.640 17:51:38 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:21.640 17:51:38 -- setup/hugepages.sh@89 -- # local node
00:05:21.640 17:51:38 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:21.640 17:51:38 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:21.640 17:51:38 -- setup/hugepages.sh@92 -- # local surp
00:05:21.640 17:51:38 -- setup/hugepages.sh@93 -- # local resv
00:05:21.640 17:51:38 -- setup/hugepages.sh@94 -- # local anon
00:05:21.640 17:51:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:21.640 17:51:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:21.640 17:51:38 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:21.640 17:51:38 -- setup/common.sh@18 -- # local node=
00:05:21.640 17:51:38 -- setup/common.sh@19 -- # local var val
00:05:21.640 17:51:38 -- setup/common.sh@20 -- # local mem_f mem
00:05:21.640 17:51:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:21.640 17:51:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:21.640 17:51:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:21.640 17:51:38 -- setup/common.sh@28 -- # mapfile -t mem
00:05:21.640 17:51:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:21.640 17:51:38 -- setup/common.sh@31 -- # IFS=': '
00:05:21.640 17:51:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6799764 kB' 'MemAvailable: 9468600 kB' 'Buffers: 3456 kB' 'Cached: 2871820 kB' 'SwapCached: 0 kB' 'Active: 466376 kB' 'Inactive: 2525560 kB' 'Active(anon): 127156 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118108 kB' 'Mapped: 50700 kB' 'Shmem: 10496 kB' 'KReclaimable: 82004 kB' 'Slab: 186688 kB' 'SReclaimable: 82004 kB' 'SUnreclaim: 104684 kB' 'KernelStack: 6892 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 306788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55956 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:05:21.640 17:51:38 -- setup/common.sh@31 -- # read -r var val _
00:05:21.640 17:51:38 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:21.640 17:51:38 -- setup/common.sh@32 -- # continue
[... identical compare/continue trace repeats for the remaining fields, MemFree through HardwareCorrupted ...]
00:05:21.642 17:51:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:21.642 17:51:38 -- setup/common.sh@33 -- # echo 0
00:05:21.642 17:51:38 -- setup/common.sh@33 -- # return 0
00:05:21.642 17:51:38 -- setup/hugepages.sh@97 -- # anon=0
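The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test that opens verify_nr_hugepages reads the transparent-hugepage mode (the bracketed entry in /sys/kernel/mm/transparent_hugepage/enabled is the active one); since THP is not disabled, the script samples AnonHugePages so THP-backed memory can be factored into the accounting, and it comes back 0 here. A simplified reconstruction, reusing the get_meminfo sketch from earlier:

#!/usr/bin/env bash
# Simplified reconstruction of the THP gate traced above
# (setup/hugepages.sh@96-97); assumes the get_meminfo sketch earlier.
anon=0
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
# Only count THP-backed anonymous memory when THP is not disabled.
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # 0 in this run
fi
echo "anon=$anon"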
00:05:21.642 17:51:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:21.642 17:51:38 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:21.642 17:51:38 -- setup/common.sh@18 -- # local node=
00:05:21.642 17:51:38 -- setup/common.sh@19 -- # local var val
00:05:21.642 17:51:38 -- setup/common.sh@20 -- # local mem_f mem
00:05:21.642 17:51:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:21.642 17:51:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:21.642 17:51:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:21.642 17:51:38 -- setup/common.sh@28 -- # mapfile -t mem
00:05:21.642 17:51:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:21.642 17:51:38 -- setup/common.sh@31 -- # IFS=': '
00:05:21.642 17:51:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6799764 kB' 'MemAvailable: 9468600 kB' 'Buffers: 3456 kB' 'Cached: 2871820 kB' 'SwapCached: 0 kB' 'Active: 465552 kB' 'Inactive: 2525560 kB' 'Active(anon): 126332 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117468 kB' 'Mapped: 50008 kB' 'Shmem: 10496 kB' 'KReclaimable: 82004 kB' 'Slab: 186616 kB' 'SReclaimable: 82004 kB' 'SUnreclaim: 104612 kB' 'KernelStack: 6832 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55908 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:05:21.642 17:51:38 -- setup/common.sh@31 -- # read -r var val _
00:05:21.642 17:51:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:21.642 17:51:38 -- setup/common.sh@32 -- # continue
[... identical compare/continue trace repeats for the remaining fields, MemFree through HugePages_Rsvd ...]
00:05:21.643 17:51:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:21.643 17:51:38 -- setup/common.sh@33 -- # echo 0
00:05:21.643 17:51:38 -- setup/common.sh@33 -- # return 0
00:05:21.643 17:51:38 -- setup/hugepages.sh@99 -- # surp=0
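With anon and surp in hand (and resv read next, below), verify_nr_hugepages has the same inputs as the hugepages.sh@110 check earlier in the trace: the kernel's HugePages_Total must equal the expected page count plus surplus and reserved pages. As a worked instance with this run's numbers (values taken from the printf dumps above; get_meminfo as sketched earlier):

#!/usr/bin/env bash
# The accounting identity behind the hugepages.sh@110 check, plugged
# with this run's values; assumes the earlier get_meminfo sketch.
nr_hugepages=1024                       # expected page count
total=$(get_meminfo HugePages_Total)    # 1024
surp=$(get_meminfo HugePages_Surp)      # 0
resv=$(get_meminfo HugePages_Rsvd)      # 0
if (( total == nr_hugepages + surp + resv )); then
    echo "node0=$total expecting $nr_hugepages"   # matches the log line above
fi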
00:05:21.644 17:51:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:21.644 17:51:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:21.644 17:51:38 -- setup/common.sh@18 -- # local node=
00:05:21.644 17:51:38 -- setup/common.sh@19 -- # local var val
00:05:21.644 17:51:38 -- setup/common.sh@20 -- # local mem_f mem
00:05:21.644 17:51:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:21.644 17:51:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:21.644 17:51:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:21.644 17:51:38 -- setup/common.sh@28 -- # mapfile -t mem
00:05:21.644 17:51:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:21.644 17:51:38 -- setup/common.sh@31 -- # IFS=': '
00:05:21.644 17:51:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6799764 kB' 'MemAvailable: 9468600 kB' 'Buffers: 3456 kB' 'Cached: 2871820 kB' 'SwapCached: 0 kB' 'Active: 465616 kB' 'Inactive: 2525560 kB' 'Active(anon): 126396 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117576 kB' 'Mapped: 50008 kB' 'Shmem: 10496 kB' 'KReclaimable: 82004 kB' 'Slab: 186612 kB' 'SReclaimable: 82004 kB' 'SUnreclaim: 104608 kB' 'KernelStack: 6848 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55924 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:05:21.644 17:51:38 -- setup/common.sh@31 -- # read -r var val _
00:05:21.644 17:51:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:21.644 17:51:38 -- setup/common.sh@32 -- # continue
[... identical compare/continue trace repeats for the remaining fields, MemFree through HugePages_Total ...]
00:05:21.645 17:51:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:21.645 17:51:38 -- setup/common.sh@32 -- # continue
00:05:21.645 17:51:38 -- setup/common.sh@31 -- # IFS=': '
00:05:21.645 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.645 17:51:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.645 17:51:38 -- setup/common.sh@33 -- # echo 0 00:05:21.645 17:51:38 -- setup/common.sh@33 -- # return 0 00:05:21.645 nr_hugepages=1024 00:05:21.645 resv_hugepages=0 00:05:21.645 surplus_hugepages=0 00:05:21.645 anon_hugepages=0 00:05:21.645 17:51:38 -- setup/hugepages.sh@100 -- # resv=0 00:05:21.645 17:51:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:21.645 17:51:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:21.645 17:51:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:21.645 17:51:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:21.645 17:51:38 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.646 17:51:38 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:21.646 17:51:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:21.646 17:51:38 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:21.646 17:51:38 -- setup/common.sh@18 -- # local node= 00:05:21.646 17:51:38 -- setup/common.sh@19 -- # local var val 00:05:21.646 17:51:38 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.646 17:51:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.646 17:51:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.646 17:51:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.646 17:51:38 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.646 17:51:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.646 17:51:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6799764 kB' 'MemAvailable: 9468600 kB' 'Buffers: 3456 kB' 'Cached: 2871820 kB' 'SwapCached: 0 kB' 'Active: 465552 kB' 'Inactive: 2525560 kB' 'Active(anon): 126332 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117468 kB' 'Mapped: 50008 kB' 'Shmem: 10496 kB' 'KReclaimable: 82004 kB' 'Slab: 186612 kB' 'SReclaimable: 82004 kB' 'SUnreclaim: 104608 kB' 'KernelStack: 6832 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55940 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:05:21.646 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.646 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.646 17:51:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.646 17:51:38 -- setup/common.sh@32 -- # continue 00:05:21.646 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.646 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.646 17:51:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.646 17:51:38 -- setup/common.sh@32 -- # continue 00:05:21.646 
17:51:38 -- setup/common.sh@31 -- # IFS=': '
[setup/common.sh@31-32: the read loop walks the HugePages_Total snapshot the same way, skipping every key from MemAvailable through Unaccepted]
00:05:21.647 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.647 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.648 17:51:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.648 17:51:38 -- setup/common.sh@33 -- # echo 1024 00:05:21.648 17:51:38 -- setup/common.sh@33 -- # return 0 00:05:21.648 17:51:38 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.648 17:51:38 -- setup/hugepages.sh@112 -- # get_nodes 00:05:21.648 17:51:38 -- setup/hugepages.sh@27 -- # local node 00:05:21.648 17:51:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:21.648 17:51:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:21.648 17:51:38 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:21.648 17:51:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:21.648 17:51:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:21.648 17:51:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:21.648 17:51:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:21.648 17:51:38 -- setup/common.sh@17 -- #
local get=HugePages_Surp 00:05:21.648 17:51:38 -- setup/common.sh@18 -- # local node=0 00:05:21.648 17:51:38 -- setup/common.sh@19 -- # local var val 00:05:21.648 17:51:38 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.648 17:51:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.648 17:51:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:21.648 17:51:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:21.648 17:51:38 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.648 17:51:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:21.648 17:51:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6799512 kB' 'MemUsed: 5439604 kB' 'SwapCached: 0 kB' 'Active: 465556 kB' 'Inactive: 2525560 kB' 'Active(anon): 126336 kB' 'Inactive(anon): 0 kB' 'Active(file): 339220 kB' 'Inactive(file): 2525560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2875276 kB' 'Mapped: 50008 kB' 'AnonPages: 117472 kB' 'Shmem: 10496 kB' 'KernelStack: 6832 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82004 kB' 'Slab: 186612 kB' 'SReclaimable: 82004 kB' 'SUnreclaim: 104608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@31-32: the node0 read loop skips every per-node key from MemTotal through FileHugePages]
00:05:21.649 17:51:38 -- setup/common.sh@32 -- # [[
FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.649 17:51:38 -- setup/common.sh@32 -- # continue 00:05:21.649 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.649 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.649 17:51:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.649 17:51:38 -- setup/common.sh@32 -- # continue 00:05:21.649 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.649 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.649 17:51:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.649 17:51:38 -- setup/common.sh@32 -- # continue 00:05:21.649 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.649 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.649 17:51:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.649 17:51:38 -- setup/common.sh@32 -- # continue 00:05:21.649 17:51:38 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.649 17:51:38 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.649 17:51:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.649 17:51:38 -- setup/common.sh@33 -- # echo 0 00:05:21.649 17:51:38 -- setup/common.sh@33 -- # return 0 00:05:21.649 17:51:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:21.649 17:51:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:21.649 17:51:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:21.649 node0=1024 expecting 1024 00:05:21.649 17:51:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:21.649 17:51:38 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:21.649 17:51:38 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:21.649 00:05:21.649 real 0m2.033s 00:05:21.649 user 0m0.845s 00:05:21.649 sys 0m1.270s 00:05:21.649 ************************************ 00:05:21.649 17:51:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.649 17:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:21.649 END TEST no_shrink_alloc 00:05:21.908 ************************************ 00:05:21.908 17:51:38 -- setup/hugepages.sh@217 -- # clear_hp 00:05:21.908 17:51:38 -- setup/hugepages.sh@37 -- # local node hp 00:05:21.908 17:51:38 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:21.908 17:51:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:21.908 17:51:38 -- setup/hugepages.sh@41 -- # echo 0 00:05:21.908 17:51:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:21.908 17:51:38 -- setup/hugepages.sh@41 -- # echo 0 00:05:21.909 17:51:38 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:21.909 17:51:38 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:21.909 00:05:21.909 real 0m8.486s 00:05:21.909 user 0m3.479s 00:05:21.909 sys 0m5.325s 00:05:21.909 17:51:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.909 17:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:21.909 ************************************ 00:05:21.909 END TEST hugepages 00:05:21.909 ************************************ 00:05:21.909 17:51:38 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:21.909 17:51:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.909 17:51:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 
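Most of the hugepages trace above is the get_meminfo helper at work: setup/common.sh snapshots /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node is given) and then read-loops over it with IFS=': ' until the requested key matches, which is why the xtrace prints one [[ key == pattern ]]/continue pair per field. A minimal, self-contained sketch of that lookup pattern and of the hugepages.sh@107/@110 consistency check it feeds (the helper body is simplified, streaming the file instead of mapfile-ing a snapshot; only the names visible in the trace are taken from it):

#!/usr/bin/env bash
# Sketch of the get_meminfo lookup pattern seen in the xtrace above.
# Streams the chosen meminfo file and echoes the value of the first
# line whose key matches $1; "$2", when given, selects a NUMA node.
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#Node +([0-9]) }          # per-node files prefix "Node N "
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue     # the skipped keys in the trace
        echo "$val"
        return 0
    done <"$mem_f"
    return 1
}

# The consistency check the test performs (hugepages.sh@107/@110):
nr_hugepages=1024
resv=$(get_meminfo HugePages_Rsvd)
surp=$(get_meminfo HugePages_Surp)
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) &&
    echo "node0=$(get_meminfo HugePages_Total 0) expecting $nr_hugepages"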
00:05:21.909 17:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:21.909 ************************************ 00:05:21.909 START TEST driver 00:05:21.909 ************************************ 00:05:21.909 17:51:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:21.909 * Looking for test storage... 00:05:21.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:21.909 17:51:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:21.909 17:51:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:21.909 17:51:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:22.168 17:51:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:22.168 17:51:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:22.168 17:51:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:22.168 17:51:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:22.168 17:51:38 -- scripts/common.sh@335 -- # IFS=.-: 00:05:22.168 17:51:38 -- scripts/common.sh@335 -- # read -ra ver1 00:05:22.168 17:51:38 -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.168 17:51:38 -- scripts/common.sh@336 -- # read -ra ver2 00:05:22.168 17:51:38 -- scripts/common.sh@337 -- # local 'op=<' 00:05:22.168 17:51:38 -- scripts/common.sh@339 -- # ver1_l=2 00:05:22.168 17:51:38 -- scripts/common.sh@340 -- # ver2_l=1 00:05:22.168 17:51:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:22.168 17:51:38 -- scripts/common.sh@343 -- # case "$op" in 00:05:22.168 17:51:38 -- scripts/common.sh@344 -- # : 1 00:05:22.168 17:51:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:22.168 17:51:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.168 17:51:38 -- scripts/common.sh@364 -- # decimal 1 00:05:22.168 17:51:38 -- scripts/common.sh@352 -- # local d=1 00:05:22.168 17:51:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.168 17:51:38 -- scripts/common.sh@354 -- # echo 1 00:05:22.168 17:51:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:22.168 17:51:38 -- scripts/common.sh@365 -- # decimal 2 00:05:22.168 17:51:38 -- scripts/common.sh@352 -- # local d=2 00:05:22.168 17:51:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.168 17:51:38 -- scripts/common.sh@354 -- # echo 2 00:05:22.168 17:51:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:22.168 17:51:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:22.168 17:51:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:22.168 17:51:38 -- scripts/common.sh@367 -- # return 0 00:05:22.168 17:51:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.168 17:51:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:22.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.168 --rc genhtml_branch_coverage=1 00:05:22.168 --rc genhtml_function_coverage=1 00:05:22.168 --rc genhtml_legend=1 00:05:22.168 --rc geninfo_all_blocks=1 00:05:22.168 --rc geninfo_unexecuted_blocks=1 00:05:22.168 00:05:22.168 ' 00:05:22.168 17:51:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:22.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.168 --rc genhtml_branch_coverage=1 00:05:22.168 --rc genhtml_function_coverage=1 00:05:22.168 --rc genhtml_legend=1 00:05:22.168 --rc geninfo_all_blocks=1 00:05:22.169 --rc geninfo_unexecuted_blocks=1 00:05:22.169 00:05:22.169 ' 00:05:22.169 17:51:38 -- common/autotest_common.sh@1704 -- # 
export 'LCOV=lcov 00:05:22.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.169 --rc genhtml_branch_coverage=1 00:05:22.169 --rc genhtml_function_coverage=1 00:05:22.169 --rc genhtml_legend=1 00:05:22.169 --rc geninfo_all_blocks=1 00:05:22.169 --rc geninfo_unexecuted_blocks=1 00:05:22.169 00:05:22.169 ' 00:05:22.169 17:51:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:22.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.169 --rc genhtml_branch_coverage=1 00:05:22.169 --rc genhtml_function_coverage=1 00:05:22.169 --rc genhtml_legend=1 00:05:22.169 --rc geninfo_all_blocks=1 00:05:22.169 --rc geninfo_unexecuted_blocks=1 00:05:22.169 00:05:22.169 ' 00:05:22.169 17:51:38 -- setup/driver.sh@68 -- # setup reset 00:05:22.169 17:51:38 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:22.169 17:51:38 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:28.732 17:51:45 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:28.733 17:51:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.733 17:51:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.733 17:51:45 -- common/autotest_common.sh@10 -- # set +x 00:05:28.733 ************************************ 00:05:28.733 START TEST guess_driver 00:05:28.733 ************************************ 00:05:28.733 17:51:45 -- common/autotest_common.sh@1114 -- # guess_driver 00:05:28.733 17:51:45 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:28.733 17:51:45 -- setup/driver.sh@47 -- # local fail=0 00:05:28.733 17:51:45 -- setup/driver.sh@49 -- # pick_driver 00:05:28.733 17:51:45 -- setup/driver.sh@36 -- # vfio 00:05:28.733 17:51:45 -- setup/driver.sh@21 -- # local iommu_grups 00:05:28.733 17:51:45 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:28.733 17:51:45 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:28.733 17:51:45 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:28.733 17:51:45 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:28.733 17:51:45 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:28.733 17:51:45 -- setup/driver.sh@32 -- # return 1 00:05:28.733 17:51:45 -- setup/driver.sh@38 -- # uio 00:05:28.733 17:51:45 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:28.733 17:51:45 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:28.733 17:51:45 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:28.733 17:51:45 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:28.733 17:51:45 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:28.733 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:28.733 17:51:45 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:28.733 17:51:45 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:28.733 17:51:45 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:28.733 Looking for driver=uio_pci_generic 00:05:28.733 17:51:45 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:28.733 17:51:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:28.733 17:51:45 -- setup/driver.sh@45 -- # setup output config 00:05:28.733 17:51:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.733 17:51:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 
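The guess_driver section just above is a small decision procedure: pick_driver first tries vfio, which is only viable when /sys/kernel/iommu_groups is non-empty or vfio's enable_unsafe_noiommu_mode reads Y (here (( 0 > 0 )) and [[ '' == Y ]] both fail), then falls back to uio_pci_generic provided modprobe --show-depends can resolve the module to a .ko; the driver.sh@57 read loop afterwards re-parses the setup.sh config output to confirm the same driver actually got bound. A sketch of that decision (the function layout and the vfio-pci echo on the first branch are my assumptions; the paths and module names come from the trace):

#!/usr/bin/env bash
# Sketch of the pick_driver logic the trace walks through.
pick_driver() {
    # vfio needs IOMMU groups, or the unsafe no-IOMMU escape hatch.
    local unsafe=''
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    local groups=(/sys/kernel/iommu_groups/*)
    # The -e guard handles an unexpanded glob; on this VM no groups
    # existed, so the vfio branch returned 1.
    if { (( ${#groups[@]} > 0 )) && [[ -e ${groups[0]} ]]; } || [[ $unsafe == Y ]]; then
        echo vfio-pci
        return 0
    fi
    # Fallback: accept uio_pci_generic if modprobe resolves it to a .ko.
    if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi
    echo 'No valid driver found'
    return 1
}

driver=$(pick_driver)           # prints uio_pci_generic in this log
echo "Looking for driver=$driver"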
00:05:29.678 17:51:46 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:29.678 17:51:46 -- setup/driver.sh@58 -- # continue 00:05:29.678 17:51:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.937 17:51:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:29.937 17:51:46 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:29.937 17:51:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.937 17:51:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:29.937 17:51:46 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:29.938 17:51:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.938 17:51:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:29.938 17:51:46 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:29.938 17:51:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:30.197 17:51:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:30.197 17:51:46 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:30.197 17:51:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:30.197 17:51:46 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:30.197 17:51:46 -- setup/driver.sh@65 -- # setup reset 00:05:30.197 17:51:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:30.197 17:51:46 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:36.769 ************************************ 00:05:36.769 END TEST guess_driver 00:05:36.769 ************************************ 00:05:36.769 00:05:36.769 real 0m8.071s 00:05:36.769 user 0m1.058s 00:05:36.769 sys 0m2.259s 00:05:36.769 17:51:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.769 17:51:53 -- common/autotest_common.sh@10 -- # set +x 00:05:36.769 ************************************ 00:05:36.769 END TEST driver 00:05:36.769 ************************************ 00:05:36.769 00:05:36.769 real 0m14.714s 00:05:36.769 user 0m1.687s 00:05:36.769 sys 0m3.564s 00:05:36.769 17:51:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.769 17:51:53 -- common/autotest_common.sh@10 -- # set +x 00:05:36.769 17:51:53 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:36.769 17:51:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.769 17:51:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.769 17:51:53 -- common/autotest_common.sh@10 -- # set +x 00:05:36.769 ************************************ 00:05:36.769 START TEST devices 00:05:36.769 ************************************ 00:05:36.769 17:51:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:36.769 * Looking for test storage... 
00:05:36.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:36.769 17:51:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:36.769 17:51:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:36.769 17:51:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:36.769 17:51:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:36.769 17:51:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:36.769 17:51:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:36.769 17:51:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:36.769 17:51:53 -- scripts/common.sh@335 -- # IFS=.-: 00:05:36.769 17:51:53 -- scripts/common.sh@335 -- # read -ra ver1 00:05:36.769 17:51:53 -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.769 17:51:53 -- scripts/common.sh@336 -- # read -ra ver2 00:05:36.769 17:51:53 -- scripts/common.sh@337 -- # local 'op=<' 00:05:36.769 17:51:53 -- scripts/common.sh@339 -- # ver1_l=2 00:05:36.769 17:51:53 -- scripts/common.sh@340 -- # ver2_l=1 00:05:36.769 17:51:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:36.769 17:51:53 -- scripts/common.sh@343 -- # case "$op" in 00:05:36.769 17:51:53 -- scripts/common.sh@344 -- # : 1 00:05:36.769 17:51:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:36.769 17:51:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.769 17:51:53 -- scripts/common.sh@364 -- # decimal 1 00:05:36.769 17:51:53 -- scripts/common.sh@352 -- # local d=1 00:05:36.769 17:51:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.769 17:51:53 -- scripts/common.sh@354 -- # echo 1 00:05:36.769 17:51:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:36.769 17:51:53 -- scripts/common.sh@365 -- # decimal 2 00:05:37.028 17:51:53 -- scripts/common.sh@352 -- # local d=2 00:05:37.028 17:51:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.028 17:51:53 -- scripts/common.sh@354 -- # echo 2 00:05:37.028 17:51:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:37.028 17:51:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:37.028 17:51:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:37.028 17:51:53 -- scripts/common.sh@367 -- # return 0 00:05:37.028 17:51:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.028 17:51:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:37.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.028 --rc genhtml_branch_coverage=1 00:05:37.028 --rc genhtml_function_coverage=1 00:05:37.028 --rc genhtml_legend=1 00:05:37.028 --rc geninfo_all_blocks=1 00:05:37.028 --rc geninfo_unexecuted_blocks=1 00:05:37.028 00:05:37.028 ' 00:05:37.028 17:51:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:37.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.028 --rc genhtml_branch_coverage=1 00:05:37.028 --rc genhtml_function_coverage=1 00:05:37.028 --rc genhtml_legend=1 00:05:37.028 --rc geninfo_all_blocks=1 00:05:37.028 --rc geninfo_unexecuted_blocks=1 00:05:37.028 00:05:37.028 ' 00:05:37.028 17:51:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:37.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.028 --rc genhtml_branch_coverage=1 00:05:37.028 --rc genhtml_function_coverage=1 00:05:37.028 --rc genhtml_legend=1 00:05:37.028 --rc geninfo_all_blocks=1 00:05:37.028 --rc geninfo_unexecuted_blocks=1 00:05:37.028 00:05:37.028 ' 00:05:37.028 17:51:53 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:37.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.028 --rc genhtml_branch_coverage=1 00:05:37.028 --rc genhtml_function_coverage=1 00:05:37.028 --rc genhtml_legend=1 00:05:37.028 --rc geninfo_all_blocks=1 00:05:37.028 --rc geninfo_unexecuted_blocks=1 00:05:37.028 00:05:37.028 ' 00:05:37.028 17:51:53 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:37.028 17:51:53 -- setup/devices.sh@192 -- # setup reset 00:05:37.028 17:51:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:37.028 17:51:53 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:38.435 17:51:55 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:38.435 17:51:55 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:38.435 17:51:55 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:38.435 17:51:55 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:38.435 17:51:55 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:38.435 17:51:55 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:05:38.435 17:51:55 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:05:38.435 17:51:55 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:05:38.435 17:51:55 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:38.435 17:51:55 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:38.435 17:51:55 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:38.435 17:51:55 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:38.435 17:51:55 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:38.435 17:51:55 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:38.435 17:51:55 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:38.435 17:51:55 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:38.435 17:51:55 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:38.435 17:51:55 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:38.435 17:51:55 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:38.435 17:51:55 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:38.435 17:51:55 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:38.435 17:51:55 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:38.435 17:51:55 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:38.435 17:51:55 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:38.435 17:51:55 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:38.435 17:51:55 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:38.435 17:51:55 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:38.435 17:51:55 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:38.435 17:51:55 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:38.435 17:51:55 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:38.435 17:51:55 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:05:38.435 17:51:55 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:05:38.435 17:51:55 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:38.435 17:51:55 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:38.435 17:51:55 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:38.435 17:51:55 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:05:38.435 17:51:55 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:05:38.435 17:51:55 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:38.435 17:51:55 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:38.435 17:51:55 -- setup/devices.sh@196 -- # blocks=() 00:05:38.435 17:51:55 -- setup/devices.sh@196 -- # declare -a blocks 00:05:38.435 17:51:55 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:38.435 17:51:55 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:38.435 17:51:55 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:38.435 17:51:55 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:38.435 17:51:55 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:38.435 17:51:55 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:38.435 17:51:55 -- setup/devices.sh@202 -- # pci=0000:00:09.0 00:05:38.435 17:51:55 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:05:38.435 17:51:55 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:38.435 17:51:55 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:38.435 17:51:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:38.435 No valid GPT data, bailing 00:05:38.435 17:51:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:38.435 17:51:55 -- scripts/common.sh@393 -- # pt= 00:05:38.435 17:51:55 -- scripts/common.sh@394 -- # return 1 00:05:38.435 17:51:55 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:38.435 17:51:55 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:38.435 17:51:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:38.435 17:51:55 -- setup/common.sh@80 -- # echo 1073741824 00:05:38.435 17:51:55 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:05:38.435 17:51:55 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:38.435 17:51:55 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:38.435 17:51:55 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:38.435 17:51:55 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:05:38.435 17:51:55 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:05:38.435 17:51:55 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:38.435 17:51:55 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:38.435 17:51:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:38.435 No valid GPT data, bailing 00:05:38.435 17:51:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:38.435 17:51:55 -- scripts/common.sh@393 -- # pt= 00:05:38.435 17:51:55 -- scripts/common.sh@394 -- # return 1 00:05:38.435 17:51:55 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:38.435 17:51:55 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:38.435 17:51:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:38.435 17:51:55 -- setup/common.sh@80 -- # echo 4294967296 00:05:38.435 17:51:55 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:38.435 17:51:55 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:38.435 17:51:55 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:05:38.435 17:51:55 -- setup/devices.sh@200 -- # 
for block in "/sys/block/nvme"!(*c*) 00:05:38.435 17:51:55 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:38.436 17:51:55 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:38.436 17:51:55 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:05:38.436 17:51:55 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:05:38.436 17:51:55 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:38.436 17:51:55 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:38.436 17:51:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:38.436 No valid GPT data, bailing 00:05:38.436 17:51:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:38.436 17:51:55 -- scripts/common.sh@393 -- # pt= 00:05:38.436 17:51:55 -- scripts/common.sh@394 -- # return 1 00:05:38.436 17:51:55 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:38.436 17:51:55 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:38.436 17:51:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:38.436 17:51:55 -- setup/common.sh@80 -- # echo 4294967296 00:05:38.436 17:51:55 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:38.436 17:51:55 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:38.436 17:51:55 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:05:38.436 17:51:55 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:38.436 17:51:55 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:38.436 17:51:55 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:38.436 17:51:55 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:05:38.436 17:51:55 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:05:38.436 17:51:55 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:38.436 17:51:55 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:38.436 17:51:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:38.695 No valid GPT data, bailing 00:05:38.695 17:51:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:38.695 17:51:55 -- scripts/common.sh@393 -- # pt= 00:05:38.695 17:51:55 -- scripts/common.sh@394 -- # return 1 00:05:38.695 17:51:55 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:38.695 17:51:55 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:38.695 17:51:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:38.695 17:51:55 -- setup/common.sh@80 -- # echo 4294967296 00:05:38.695 17:51:55 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:38.695 17:51:55 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:38.695 17:51:55 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:05:38.695 17:51:55 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:38.695 17:51:55 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:05:38.695 17:51:55 -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:38.695 17:51:55 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:38.695 17:51:55 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:38.695 17:51:55 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:05:38.696 17:51:55 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:05:38.696 17:51:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:05:38.696 No valid GPT data, bailing 00:05:38.696 17:51:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:38.696 
17:51:55 -- scripts/common.sh@393 -- # pt= 00:05:38.696 17:51:55 -- scripts/common.sh@394 -- # return 1 00:05:38.696 17:51:55 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:05:38.696 17:51:55 -- setup/common.sh@76 -- # local dev=nvme2n1 00:05:38.696 17:51:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:05:38.696 17:51:55 -- setup/common.sh@80 -- # echo 6343335936 00:05:38.696 17:51:55 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:05:38.696 17:51:55 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:38.696 17:51:55 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:38.696 17:51:55 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:38.696 17:51:55 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:05:38.696 17:51:55 -- setup/devices.sh@201 -- # ctrl=nvme3 00:05:38.696 17:51:55 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:38.696 17:51:55 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:38.696 17:51:55 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:05:38.696 17:51:55 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:05:38.696 17:51:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:05:38.696 No valid GPT data, bailing 00:05:38.696 17:51:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:38.696 17:51:55 -- scripts/common.sh@393 -- # pt= 00:05:38.696 17:51:55 -- scripts/common.sh@394 -- # return 1 00:05:38.696 17:51:55 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:05:38.696 17:51:55 -- setup/common.sh@76 -- # local dev=nvme3n1 00:05:38.696 17:51:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:05:38.696 17:51:55 -- setup/common.sh@80 -- # echo 5368709120 00:05:38.696 17:51:55 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:38.696 17:51:55 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:38.696 17:51:55 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:38.696 17:51:55 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:05:38.696 17:51:55 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:05:38.696 17:51:55 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:38.696 17:51:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.696 17:51:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.696 17:51:55 -- common/autotest_common.sh@10 -- # set +x 00:05:38.696 ************************************ 00:05:38.696 START TEST nvme_mount 00:05:38.696 ************************************ 00:05:38.696 17:51:55 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:38.696 17:51:55 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:05:38.696 17:51:55 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:05:38.696 17:51:55 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:38.696 17:51:55 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:38.696 17:51:55 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:05:38.696 17:51:55 -- setup/common.sh@39 -- # local disk=nvme1n1 00:05:38.696 17:51:55 -- setup/common.sh@40 -- # local part_no=1 00:05:38.696 17:51:55 -- setup/common.sh@41 -- # local size=1073741824 00:05:38.696 17:51:55 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:38.696 17:51:55 -- setup/common.sh@44 -- # parts=() 00:05:38.696 17:51:55 -- 
setup/common.sh@44 -- # local parts 00:05:38.696 17:51:55 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:38.696 17:51:55 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:38.696 17:51:55 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:38.696 17:51:55 -- setup/common.sh@46 -- # (( part++ )) 00:05:38.696 17:51:55 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:38.696 17:51:55 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:38.696 17:51:55 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:05:38.696 17:51:55 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:05:40.075 Creating new GPT entries in memory. 00:05:40.075 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:40.075 other utilities. 00:05:40.075 17:51:56 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:40.075 17:51:56 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:40.075 17:51:56 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:40.075 17:51:56 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:40.075 17:51:56 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:05:41.013 Creating new GPT entries in memory. 00:05:41.013 The operation has completed successfully. 00:05:41.013 17:51:57 -- setup/common.sh@57 -- # (( part++ )) 00:05:41.013 17:51:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:41.013 17:51:57 -- setup/common.sh@62 -- # wait 66194 00:05:41.013 17:51:57 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:41.013 17:51:57 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:41.013 17:51:57 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:41.013 17:51:57 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:05:41.013 17:51:57 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:05:41.013 17:51:57 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:41.013 17:51:57 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:41.013 17:51:57 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:41.013 17:51:57 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:05:41.013 17:51:57 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:41.013 17:51:57 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:41.013 17:51:57 -- setup/devices.sh@53 -- # local found=0 00:05:41.013 17:51:57 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:41.013 17:51:57 -- setup/devices.sh@56 -- # : 00:05:41.013 17:51:57 -- setup/devices.sh@59 -- # local pci status 00:05:41.013 17:51:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.013 17:51:57 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:41.013 17:51:57 -- setup/devices.sh@47 -- # setup output config 00:05:41.013 17:51:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:41.013 17:51:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:41.013 17:51:57 -- setup/devices.sh@62 -- # [[ 
0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:41.013 17:51:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.272 17:51:58 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:41.272 17:51:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.531 17:51:58 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:41.531 17:51:58 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:05:41.531 17:51:58 -- setup/devices.sh@63 -- # found=1 00:05:41.531 17:51:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.531 17:51:58 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:41.531 17:51:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.791 17:51:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:41.791 17:51:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.791 17:51:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:41.791 17:51:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.050 17:51:58 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:42.050 17:51:58 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:42.050 17:51:58 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.050 17:51:58 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:42.050 17:51:58 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:42.050 17:51:58 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:42.050 17:51:58 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.050 17:51:58 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.050 17:51:58 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:42.050 17:51:58 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:05:42.050 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:42.050 17:51:58 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:05:42.050 17:51:58 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:05:42.309 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:42.309 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:42.309 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:42.309 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:05:42.309 17:51:59 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:42.309 17:51:59 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:42.309 17:51:59 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.309 17:51:59 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:05:42.309 17:51:59 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:05:42.309 17:51:59 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.309 17:51:59 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:42.309 17:51:59 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:42.309 17:51:59 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:05:42.309 17:51:59 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.309 17:51:59 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:42.309 17:51:59 -- setup/devices.sh@53 -- # local found=0 00:05:42.309 17:51:59 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:42.309 17:51:59 -- setup/devices.sh@56 -- # : 00:05:42.309 17:51:59 -- setup/devices.sh@59 -- # local pci status 00:05:42.309 17:51:59 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:42.309 17:51:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.309 17:51:59 -- setup/devices.sh@47 -- # setup output config 00:05:42.309 17:51:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.309 17:51:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:42.568 17:51:59 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:42.568 17:51:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.827 17:51:59 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:42.827 17:51:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.086 17:51:59 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:43.086 17:51:59 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:05:43.086 17:51:59 -- setup/devices.sh@63 -- # found=1 00:05:43.086 17:51:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.086 17:51:59 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:43.086 17:51:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.345 17:52:00 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:43.345 17:52:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.345 17:52:00 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:43.345 17:52:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.604 17:52:00 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:43.604 17:52:00 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:43.604 17:52:00 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.604 17:52:00 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:43.604 17:52:00 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:43.604 17:52:00 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.604 17:52:00 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:05:43.604 17:52:00 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:43.604 17:52:00 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:05:43.604 17:52:00 -- setup/devices.sh@50 -- # local mount_point= 00:05:43.604 17:52:00 -- setup/devices.sh@51 -- # local test_file= 00:05:43.604 17:52:00 -- setup/devices.sh@53 -- # local found=0 
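The verify step above drives /home/vagrant/spdk_repo/spdk/scripts/setup.sh with PCI_ALLOWED narrowed to a single controller, and setup.sh declines to bind any device that is still active (here the mounted nvme1n1). A minimal sketch of the same allowlist pattern, assuming a disposable controller at an illustrative BDF:

  # Sketch: only the allowlisted controller is considered for driver binding;
  # devices with active mounts are reported and skipped, as in the log above.
  sudo PCI_ALLOWED="0000:00:08.0" /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
  sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status   # inspect the result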
00:05:43.604 17:52:00 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:43.604 17:52:00 -- setup/devices.sh@59 -- # local pci status 00:05:43.604 17:52:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.604 17:52:00 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:43.604 17:52:00 -- setup/devices.sh@47 -- # setup output config 00:05:43.604 17:52:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:43.604 17:52:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:43.863 17:52:00 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:43.863 17:52:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.863 17:52:00 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:43.864 17:52:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.430 17:52:01 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:44.430 17:52:01 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:05:44.430 17:52:01 -- setup/devices.sh@63 -- # found=1 00:05:44.430 17:52:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.430 17:52:01 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:44.430 17:52:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.430 17:52:01 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:44.430 17:52:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.688 17:52:01 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:44.688 17:52:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.688 17:52:01 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:44.688 17:52:01 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:44.688 17:52:01 -- setup/devices.sh@68 -- # return 0 00:05:44.688 17:52:01 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:44.688 17:52:01 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:44.688 17:52:01 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:44.688 17:52:01 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:05:44.688 17:52:01 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:05:44.688 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:44.688 00:05:44.688 real 0m6.018s 00:05:44.688 user 0m1.474s 00:05:44.688 sys 0m2.272s 00:05:44.688 17:52:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.688 ************************************ 00:05:44.688 17:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:44.688 END TEST nvme_mount 00:05:44.688 ************************************ 00:05:44.947 17:52:01 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:44.948 17:52:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.948 17:52:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.948 17:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:44.948 ************************************ 00:05:44.948 START TEST dm_mount 00:05:44.948 ************************************ 00:05:44.948 17:52:01 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:44.948 17:52:01 -- setup/devices.sh@144 -- # pv=nvme1n1 00:05:44.948 17:52:01 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:05:44.948 17:52:01 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:05:44.948 17:52:01 -- setup/devices.sh@148 -- # 
partition_drive nvme1n1 00:05:44.948 17:52:01 -- setup/common.sh@39 -- # local disk=nvme1n1 00:05:44.948 17:52:01 -- setup/common.sh@40 -- # local part_no=2 00:05:44.948 17:52:01 -- setup/common.sh@41 -- # local size=1073741824 00:05:44.948 17:52:01 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:44.948 17:52:01 -- setup/common.sh@44 -- # parts=() 00:05:44.948 17:52:01 -- setup/common.sh@44 -- # local parts 00:05:44.948 17:52:01 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:44.948 17:52:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:44.948 17:52:01 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:44.948 17:52:01 -- setup/common.sh@46 -- # (( part++ )) 00:05:44.948 17:52:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:44.948 17:52:01 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:44.948 17:52:01 -- setup/common.sh@46 -- # (( part++ )) 00:05:44.948 17:52:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:44.948 17:52:01 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:44.948 17:52:01 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:05:44.948 17:52:01 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:05:45.885 Creating new GPT entries in memory. 00:05:45.885 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:45.885 other utilities. 00:05:45.885 17:52:02 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:45.885 17:52:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:45.885 17:52:02 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:45.885 17:52:02 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:45.885 17:52:02 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:05:46.821 Creating new GPT entries in memory. 00:05:46.821 The operation has completed successfully. 00:05:46.821 17:52:03 -- setup/common.sh@57 -- # (( part++ )) 00:05:46.821 17:52:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:46.821 17:52:03 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:46.821 17:52:03 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:46.821 17:52:03 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:05:48.198 The operation has completed successfully. 
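partition_drive derives 262144-sector partitions (128 MiB at 512-byte sectors, from size=1073741824 divided by 4096), zaps any existing table, and serializes each sgdisk call under flock on the disk node so concurrent probes cannot race the partition table; sync_dev_uevents.sh then blocks until the kernel announces the new partition nodes. A condensed sketch of the same sequence, with udevadm settle standing in for the uevent helper:

  # Sketch: zap and repartition a scratch disk, then wait for device nodes.
  # $disk must be a disposable device; sector ranges mirror the log above.
  disk=/dev/nvme1n1
  sgdisk "$disk" --zap-all
  flock "$disk" sgdisk "$disk" --new=1:2048:264191      # p1: 128 MiB
  flock "$disk" sgdisk "$disk" --new=2:264192:526335    # p2: 128 MiB
  udevadm settle                                        # wait for ${disk}p1, ${disk}p2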
00:05:48.198 17:52:04 -- setup/common.sh@57 -- # (( part++ )) 00:05:48.198 17:52:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:48.198 17:52:04 -- setup/common.sh@62 -- # wait 66833 00:05:48.198 17:52:04 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:48.198 17:52:04 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.198 17:52:04 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:48.198 17:52:04 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:48.198 17:52:04 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:48.198 17:52:04 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:48.198 17:52:04 -- setup/devices.sh@161 -- # break 00:05:48.198 17:52:04 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:48.198 17:52:04 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:48.198 17:52:04 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:48.198 17:52:04 -- setup/devices.sh@166 -- # dm=dm-0 00:05:48.198 17:52:04 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:05:48.198 17:52:04 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:05:48.198 17:52:04 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.198 17:52:04 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:48.198 17:52:04 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.198 17:52:04 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:48.198 17:52:04 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:48.198 17:52:04 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.198 17:52:04 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:48.198 17:52:04 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:48.198 17:52:04 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:05:48.198 17:52:04 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.198 17:52:04 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:48.198 17:52:04 -- setup/devices.sh@53 -- # local found=0 00:05:48.198 17:52:04 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:48.198 17:52:04 -- setup/devices.sh@56 -- # : 00:05:48.198 17:52:04 -- setup/devices.sh@59 -- # local pci status 00:05:48.198 17:52:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:48.198 17:52:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.198 17:52:04 -- setup/devices.sh@47 -- # setup output config 00:05:48.198 17:52:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:48.198 17:52:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:48.198 17:52:05 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:48.198 17:52:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.456 17:52:05 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:48.456 17:52:05 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.715 17:52:05 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:48.715 17:52:05 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:48.715 17:52:05 -- setup/devices.sh@63 -- # found=1 00:05:48.715 17:52:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.715 17:52:05 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:48.715 17:52:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.974 17:52:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:48.974 17:52:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.233 17:52:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:49.233 17:52:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.233 17:52:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:49.233 17:52:06 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:49.233 17:52:06 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:49.233 17:52:06 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:49.233 17:52:06 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:49.233 17:52:06 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:49.233 17:52:06 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:05:49.233 17:52:06 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:49.233 17:52:06 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:05:49.233 17:52:06 -- setup/devices.sh@50 -- # local mount_point= 00:05:49.233 17:52:06 -- setup/devices.sh@51 -- # local test_file= 00:05:49.233 17:52:06 -- setup/devices.sh@53 -- # local found=0 00:05:49.233 17:52:06 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:49.233 17:52:06 -- setup/devices.sh@59 -- # local pci status 00:05:49.233 17:52:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.233 17:52:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:49.233 17:52:06 -- setup/devices.sh@47 -- # setup output config 00:05:49.233 17:52:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:49.233 17:52:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:49.492 17:52:06 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:49.492 17:52:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.751 17:52:06 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:49.751 17:52:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.011 17:52:06 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:50.011 17:52:06 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:05:50.011 17:52:06 -- setup/devices.sh@63 -- # found=1 00:05:50.011 17:52:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.011 17:52:06 -- setup/devices.sh@62 -- 
# [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:50.011 17:52:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.270 17:52:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:50.270 17:52:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.270 17:52:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:50.270 17:52:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.530 17:52:07 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:50.530 17:52:07 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:50.530 17:52:07 -- setup/devices.sh@68 -- # return 0 00:05:50.530 17:52:07 -- setup/devices.sh@187 -- # cleanup_dm 00:05:50.530 17:52:07 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:50.530 17:52:07 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:50.530 17:52:07 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:50.530 17:52:07 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:50.530 17:52:07 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:05:50.530 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:50.530 17:52:07 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:05:50.530 17:52:07 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:05:50.530 00:05:50.530 real 0m5.674s 00:05:50.530 user 0m0.973s 00:05:50.530 sys 0m1.612s 00:05:50.530 17:52:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:50.530 17:52:07 -- common/autotest_common.sh@10 -- # set +x 00:05:50.530 ************************************ 00:05:50.530 END TEST dm_mount 00:05:50.530 ************************************ 00:05:50.530 17:52:07 -- setup/devices.sh@1 -- # cleanup 00:05:50.530 17:52:07 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:50.530 17:52:07 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:50.530 17:52:07 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:50.530 17:52:07 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:05:50.530 17:52:07 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:05:50.530 17:52:07 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:05:50.790 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:50.790 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:50.790 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:50.790 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:05:50.790 17:52:07 -- setup/devices.sh@12 -- # cleanup_dm 00:05:50.790 17:52:07 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:50.790 17:52:07 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:50.790 17:52:07 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:50.790 17:52:07 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:05:50.790 17:52:07 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:05:50.790 17:52:07 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:05:50.790 00:05:50.790 real 0m14.217s 00:05:50.790 user 0m3.513s 00:05:50.790 sys 0m5.027s 00:05:50.790 17:52:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:50.790 ************************************ 00:05:50.790 END TEST devices 00:05:50.790 ************************************ 00:05:50.790 17:52:07 -- common/autotest_common.sh@10 -- # 
set +x 00:05:51.049 ************************************ 00:05:51.049 END TEST setup.sh 00:05:51.049 ************************************ 00:05:51.049 00:05:51.049 real 0m51.679s 00:05:51.049 user 0m12.357s 00:05:51.049 sys 0m19.772s 00:05:51.049 17:52:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:51.049 17:52:07 -- common/autotest_common.sh@10 -- # set +x 00:05:51.049 17:52:07 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:51.308 Hugepages 00:05:51.308 node hugesize free / total 00:05:51.308 node0 1048576kB 0 / 0 00:05:51.308 node0 2048kB 2048 / 2048 00:05:51.308 00:05:51.308 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:51.308 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:51.566 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:05:51.566 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:51.824 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:51.824 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:51.824 17:52:08 -- spdk/autotest.sh@128 -- # uname -s 00:05:51.824 17:52:08 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:51.824 17:52:08 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:51.824 17:52:08 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:53.201 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:53.201 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:05:53.201 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:53.201 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:53.459 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:05:53.459 17:52:10 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:54.431 17:52:11 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:54.431 17:52:11 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:54.431 17:52:11 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:54.431 17:52:11 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:54.431 17:52:11 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:54.431 17:52:11 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:54.431 17:52:11 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:54.431 17:52:11 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:54.431 17:52:11 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:54.719 17:52:11 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:05:54.719 17:52:11 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:05:54.719 17:52:11 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:55.286 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:55.286 Waiting for block devices as requested 00:05:55.545 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:05:55.545 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:05:55.804 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:55.804 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:06:01.075 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:06:01.075 17:52:17 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:06:01.075 17:52:17 -- 
common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:06:01.075 17:52:17 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:01.075 17:52:17 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:06:01.075 17:52:17 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:06:01.075 17:52:17 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:06:01.076 17:52:17 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:06:01.076 17:52:17 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme2 00:06:01.076 17:52:17 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme2 00:06:01.076 17:52:17 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme2 ]] 00:06:01.076 17:52:17 -- common/autotest_common.sh@1540 -- # grep oacs 00:06:01.076 17:52:17 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:06:01.076 17:52:17 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:01.076 17:52:17 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:06:01.076 17:52:17 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:06:01.076 17:52:17 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:06:01.076 17:52:17 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme2 00:06:01.076 17:52:17 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:06:01.076 17:52:17 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:06:01.076 17:52:17 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:06:01.076 17:52:17 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:06:01.076 17:52:17 -- common/autotest_common.sh@1552 -- # continue 00:06:01.076 17:52:17 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:06:01.076 17:52:17 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:06:01.076 17:52:17 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:01.076 17:52:17 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:06:01.076 17:52:17 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:06:01.076 17:52:17 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:06:01.076 17:52:17 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:06:01.076 17:52:17 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme3 00:06:01.076 17:52:17 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme3 00:06:01.076 17:52:17 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme3 ]] 00:06:01.076 17:52:17 -- common/autotest_common.sh@1540 -- # grep oacs 00:06:01.076 17:52:17 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:06:01.076 17:52:17 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:01.076 17:52:17 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:06:01.076 17:52:17 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:06:01.076 17:52:17 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:06:01.076 17:52:17 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme3 00:06:01.076 17:52:17 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:06:01.076 17:52:17 -- common/autotest_common.sh@1549 -- # cut 
-d: -f2 00:06:01.076 17:52:17 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:06:01.076 17:52:17 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:06:01.076 17:52:17 -- common/autotest_common.sh@1552 -- # continue 00:06:01.076 17:52:17 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:06:01.076 17:52:17 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:06:01.076 17:52:17 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:01.076 17:52:17 -- common/autotest_common.sh@1497 -- # grep 0000:00:08.0/nvme/nvme 00:06:01.076 17:52:17 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:06:01.076 17:52:17 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:06:01.076 17:52:17 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:06:01.076 17:52:17 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:06:01.076 17:52:17 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:06:01.076 17:52:17 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:06:01.076 17:52:17 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:01.076 17:52:17 -- common/autotest_common.sh@1540 -- # grep oacs 00:06:01.076 17:52:17 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:01.076 17:52:17 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:06:01.076 17:52:17 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:06:01.076 17:52:17 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:06:01.076 17:52:17 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:06:01.076 17:52:17 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:06:01.076 17:52:17 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:06:01.076 17:52:17 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:06:01.076 17:52:17 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:06:01.076 17:52:17 -- common/autotest_common.sh@1552 -- # continue 00:06:01.076 17:52:17 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:06:01.076 17:52:17 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:06:01.076 17:52:17 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:01.076 17:52:17 -- common/autotest_common.sh@1497 -- # grep 0000:00:09.0/nvme/nvme 00:06:01.076 17:52:17 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:06:01.076 17:52:17 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:06:01.076 17:52:17 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:06:01.076 17:52:17 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:06:01.076 17:52:17 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:06:01.076 17:52:17 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:06:01.076 17:52:17 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:01.076 17:52:17 -- common/autotest_common.sh@1540 -- # grep oacs 00:06:01.076 17:52:17 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:01.076 17:52:17 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:06:01.076 17:52:17 -- 
common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:06:01.076 17:52:17 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:06:01.076 17:52:17 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:06:01.076 17:52:17 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:06:01.076 17:52:17 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:06:01.076 17:52:17 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:06:01.076 17:52:17 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:06:01.076 17:52:17 -- common/autotest_common.sh@1552 -- # continue 00:06:01.076 17:52:17 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:06:01.076 17:52:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.076 17:52:17 -- common/autotest_common.sh@10 -- # set +x 00:06:01.076 17:52:17 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:06:01.076 17:52:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.076 17:52:17 -- common/autotest_common.sh@10 -- # set +x 00:06:01.334 17:52:17 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:02.270 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:02.527 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:06:02.527 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:06:02.527 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:06:02.785 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:06:02.785 17:52:19 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:06:02.785 17:52:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:02.785 17:52:19 -- common/autotest_common.sh@10 -- # set +x 00:06:02.785 17:52:19 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:06:02.785 17:52:19 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:06:02.785 17:52:19 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:06:02.785 17:52:19 -- common/autotest_common.sh@1572 -- # bdfs=() 00:06:02.785 17:52:19 -- common/autotest_common.sh@1572 -- # local bdfs 00:06:02.785 17:52:19 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:06:02.785 17:52:19 -- common/autotest_common.sh@1508 -- # bdfs=() 00:06:02.785 17:52:19 -- common/autotest_common.sh@1508 -- # local bdfs 00:06:02.785 17:52:19 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:02.785 17:52:19 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:02.785 17:52:19 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:06:03.044 17:52:19 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:06:03.044 17:52:19 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:06:03.044 17:52:19 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:06:03.044 17:52:19 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:06:03.044 17:52:19 -- common/autotest_common.sh@1575 -- # device=0x0010 00:06:03.044 17:52:19 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:03.044 17:52:19 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:06:03.044 17:52:19 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:06:03.044 17:52:19 -- common/autotest_common.sh@1575 -- # device=0x0010 00:06:03.044 17:52:19 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
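Each iteration of the loop above maps a PCI BDF back to its NVMe character device through sysfs, then reads the controller's OACS (Optional Admin Command Support) field from nvme id-ctrl: oacs=0x12a has bit 3 set (0x12a & 0x8 = 8), so namespace management is supported, and an unvmcap of 0 means there is no unallocated capacity left to revert, hence the continue. A standalone sketch of that probe, with the BDF as an illustrative value:

  # Sketch: resolve a BDF to /dev/nvmeX and test the namespace-management bit.
  bdf=0000:00:08.0
  for d in /sys/bus/pci/devices/"$bdf"/nvme/nvme*; do
      ctrl=/dev/$(basename "$d")                        # e.g. /dev/nvme1
  done
  oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)
  if (( (oacs & 0x8) != 0 )); then
      echo "$ctrl supports namespace management"
  fi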
00:06:03.044 17:52:19 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:06:03.044 17:52:19 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:06:03.044 17:52:19 -- common/autotest_common.sh@1575 -- # device=0x0010 00:06:03.044 17:52:19 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:03.044 17:52:19 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:06:03.044 17:52:19 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:06:03.044 17:52:19 -- common/autotest_common.sh@1575 -- # device=0x0010 00:06:03.044 17:52:19 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:03.044 17:52:19 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:06:03.044 17:52:19 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:06:03.044 17:52:19 -- common/autotest_common.sh@1588 -- # return 0 00:06:03.044 17:52:19 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:06:03.045 17:52:19 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:06:03.045 17:52:19 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:06:03.045 17:52:19 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:06:03.045 17:52:19 -- spdk/autotest.sh@160 -- # timing_enter lib 00:06:03.045 17:52:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:03.045 17:52:19 -- common/autotest_common.sh@10 -- # set +x 00:06:03.045 17:52:19 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:03.045 17:52:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.045 17:52:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.045 17:52:19 -- common/autotest_common.sh@10 -- # set +x 00:06:03.045 ************************************ 00:06:03.045 START TEST env 00:06:03.045 ************************************ 00:06:03.045 17:52:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:03.045 * Looking for test storage... 00:06:03.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:03.045 17:52:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:03.045 17:52:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:03.045 17:52:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:03.303 17:52:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:03.303 17:52:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:03.303 17:52:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:03.303 17:52:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:03.303 17:52:20 -- scripts/common.sh@335 -- # IFS=.-: 00:06:03.303 17:52:20 -- scripts/common.sh@335 -- # read -ra ver1 00:06:03.303 17:52:20 -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.303 17:52:20 -- scripts/common.sh@336 -- # read -ra ver2 00:06:03.303 17:52:20 -- scripts/common.sh@337 -- # local 'op=<' 00:06:03.303 17:52:20 -- scripts/common.sh@339 -- # ver1_l=2 00:06:03.303 17:52:20 -- scripts/common.sh@340 -- # ver2_l=1 00:06:03.303 17:52:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:03.303 17:52:20 -- scripts/common.sh@343 -- # case "$op" in 00:06:03.303 17:52:20 -- scripts/common.sh@344 -- # : 1 00:06:03.303 17:52:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:03.303 17:52:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.304 17:52:20 -- scripts/common.sh@364 -- # decimal 1 00:06:03.304 17:52:20 -- scripts/common.sh@352 -- # local d=1 00:06:03.304 17:52:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.304 17:52:20 -- scripts/common.sh@354 -- # echo 1 00:06:03.304 17:52:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:03.304 17:52:20 -- scripts/common.sh@365 -- # decimal 2 00:06:03.304 17:52:20 -- scripts/common.sh@352 -- # local d=2 00:06:03.304 17:52:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.304 17:52:20 -- scripts/common.sh@354 -- # echo 2 00:06:03.304 17:52:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:03.304 17:52:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:03.304 17:52:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:03.304 17:52:20 -- scripts/common.sh@367 -- # return 0 00:06:03.304 17:52:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.304 17:52:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:03.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.304 --rc genhtml_branch_coverage=1 00:06:03.304 --rc genhtml_function_coverage=1 00:06:03.304 --rc genhtml_legend=1 00:06:03.304 --rc geninfo_all_blocks=1 00:06:03.304 --rc geninfo_unexecuted_blocks=1 00:06:03.304 00:06:03.304 ' 00:06:03.304 17:52:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:03.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.304 --rc genhtml_branch_coverage=1 00:06:03.304 --rc genhtml_function_coverage=1 00:06:03.304 --rc genhtml_legend=1 00:06:03.304 --rc geninfo_all_blocks=1 00:06:03.304 --rc geninfo_unexecuted_blocks=1 00:06:03.304 00:06:03.304 ' 00:06:03.304 17:52:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:03.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.304 --rc genhtml_branch_coverage=1 00:06:03.304 --rc genhtml_function_coverage=1 00:06:03.304 --rc genhtml_legend=1 00:06:03.304 --rc geninfo_all_blocks=1 00:06:03.304 --rc geninfo_unexecuted_blocks=1 00:06:03.304 00:06:03.304 ' 00:06:03.304 17:52:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:03.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.304 --rc genhtml_branch_coverage=1 00:06:03.304 --rc genhtml_function_coverage=1 00:06:03.304 --rc genhtml_legend=1 00:06:03.304 --rc geninfo_all_blocks=1 00:06:03.304 --rc geninfo_unexecuted_blocks=1 00:06:03.304 00:06:03.304 ' 00:06:03.304 17:52:20 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:03.304 17:52:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.304 17:52:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.304 17:52:20 -- common/autotest_common.sh@10 -- # set +x 00:06:03.304 ************************************ 00:06:03.304 START TEST env_memory 00:06:03.304 ************************************ 00:06:03.304 17:52:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:03.304 00:06:03.304 00:06:03.304 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.304 http://cunit.sourceforge.net/ 00:06:03.304 00:06:03.304 00:06:03.304 Suite: memory 00:06:03.304 Test: alloc and free memory map ...[2024-11-26 17:52:20.146669] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:03.304 passed 00:06:03.304 Test: mem 
map translation ...[2024-11-26 17:52:20.192388] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:03.304 [2024-11-26 17:52:20.192445] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:03.304 [2024-11-26 17:52:20.192523] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:03.304 [2024-11-26 17:52:20.192548] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:03.563 passed 00:06:03.563 Test: mem map registration ...[2024-11-26 17:52:20.261916] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:03.563 [2024-11-26 17:52:20.261970] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:03.563 passed 00:06:03.563 Test: mem map adjacent registrations ...passed 00:06:03.563 00:06:03.563 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.563 suites 1 1 n/a 0 0 00:06:03.563 tests 4 4 4 0 0 00:06:03.563 asserts 152 152 152 0 n/a 00:06:03.563 00:06:03.563 Elapsed time = 0.243 seconds 00:06:03.563 00:06:03.563 real 0m0.304s 00:06:03.563 user 0m0.256s 00:06:03.563 sys 0m0.033s 00:06:03.563 17:52:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.563 ************************************ 00:06:03.563 END TEST env_memory 00:06:03.564 ************************************ 00:06:03.564 17:52:20 -- common/autotest_common.sh@10 -- # set +x 00:06:03.564 17:52:20 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:03.564 17:52:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.564 17:52:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.564 17:52:20 -- common/autotest_common.sh@10 -- # set +x 00:06:03.564 ************************************ 00:06:03.564 START TEST env_vtophys 00:06:03.564 ************************************ 00:06:03.564 17:52:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:03.823 EAL: lib.eal log level changed from notice to debug 00:06:03.823 EAL: Detected lcore 0 as core 0 on socket 0 00:06:03.823 EAL: Detected lcore 1 as core 0 on socket 0 00:06:03.823 EAL: Detected lcore 2 as core 0 on socket 0 00:06:03.823 EAL: Detected lcore 3 as core 0 on socket 0 00:06:03.823 EAL: Detected lcore 4 as core 0 on socket 0 00:06:03.823 EAL: Detected lcore 5 as core 0 on socket 0 00:06:03.823 EAL: Detected lcore 6 as core 0 on socket 0 00:06:03.823 EAL: Detected lcore 7 as core 0 on socket 0 00:06:03.823 EAL: Detected lcore 8 as core 0 on socket 0 00:06:03.823 EAL: Detected lcore 9 as core 0 on socket 0 00:06:03.823 EAL: Maximum logical cores by configuration: 128 00:06:03.823 EAL: Detected CPU lcores: 10 00:06:03.823 EAL: Detected NUMA nodes: 1 00:06:03.823 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:06:03.823 EAL: Detected shared linkage of DPDK 00:06:03.823 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:06:03.823 EAL: open shared lib 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:06:03.823 EAL: Registered [vdev] bus. 00:06:03.823 EAL: bus.vdev log level changed from disabled to notice 00:06:03.823 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:06:03.823 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:06:03.823 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:03.823 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:03.823 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:03.823 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:03.824 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:03.824 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:03.824 EAL: No shared files mode enabled, IPC will be disabled 00:06:03.824 EAL: No shared files mode enabled, IPC is disabled 00:06:03.824 EAL: Selected IOVA mode 'PA' 00:06:03.824 EAL: Probing VFIO support... 00:06:03.824 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:03.824 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:03.824 EAL: Ask a virtual area of 0x2e000 bytes 00:06:03.824 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:03.824 EAL: Setting up physically contiguous memory... 00:06:03.824 EAL: Setting maximum number of open files to 524288 00:06:03.824 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:03.824 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:03.824 EAL: Ask a virtual area of 0x61000 bytes 00:06:03.824 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:03.824 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:03.824 EAL: Ask a virtual area of 0x400000000 bytes 00:06:03.824 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:03.824 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:03.824 EAL: Ask a virtual area of 0x61000 bytes 00:06:03.824 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:03.824 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:03.824 EAL: Ask a virtual area of 0x400000000 bytes 00:06:03.824 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:03.824 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:03.824 EAL: Ask a virtual area of 0x61000 bytes 00:06:03.824 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:03.824 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:03.824 EAL: Ask a virtual area of 0x400000000 bytes 00:06:03.824 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:03.824 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:03.824 EAL: Ask a virtual area of 0x61000 bytes 00:06:03.824 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:03.824 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:03.824 EAL: Ask a virtual area of 0x400000000 bytes 00:06:03.824 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:03.824 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:03.824 EAL: Hugepages will be freed exactly as allocated. 
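The EAL bring-up above reserves virtual address space for four memseg lists backed by 2 MiB hugepages and notes that hugepages will be freed exactly as allocated, so the pages must be reserved before the env suite starts. A minimal sketch of that preparation, assuming the stock SPDK setup script and that 1024 MiB is enough for this suite:

  # Reserve hugepages up front (HUGEMEM is in MiB); the repo path matches this run.
  sudo HUGEMEM=1024 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
  # ... run the env tests ...
  # Return the pages to the kernel once the suite finishes.
  sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset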
00:06:03.824 EAL: No shared files mode enabled, IPC is disabled 00:06:03.824 EAL: No shared files mode enabled, IPC is disabled 00:06:03.824 EAL: TSC frequency is ~2490000 KHz 00:06:03.824 EAL: Main lcore 0 is ready (tid=7f2838adba40;cpuset=[0]) 00:06:03.824 EAL: Trying to obtain current memory policy. 00:06:03.824 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.824 EAL: Restoring previous memory policy: 0 00:06:03.824 EAL: request: mp_malloc_sync 00:06:03.824 EAL: No shared files mode enabled, IPC is disabled 00:06:03.824 EAL: Heap on socket 0 was expanded by 2MB 00:06:03.824 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:03.824 EAL: No shared files mode enabled, IPC is disabled 00:06:03.824 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:03.824 EAL: Mem event callback 'spdk:(nil)' registered 00:06:03.824 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:03.824 00:06:03.824 00:06:03.824 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.824 http://cunit.sourceforge.net/ 00:06:03.824 00:06:03.824 00:06:03.824 Suite: components_suite 00:06:04.393 Test: vtophys_malloc_test ...passed 00:06:04.393 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:04.393 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.393 EAL: Restoring previous memory policy: 4 00:06:04.393 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.393 EAL: request: mp_malloc_sync 00:06:04.393 EAL: No shared files mode enabled, IPC is disabled 00:06:04.393 EAL: Heap on socket 0 was expanded by 4MB 00:06:04.393 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.393 EAL: request: mp_malloc_sync 00:06:04.393 EAL: No shared files mode enabled, IPC is disabled 00:06:04.393 EAL: Heap on socket 0 was shrunk by 4MB 00:06:04.393 EAL: Trying to obtain current memory policy. 00:06:04.393 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.393 EAL: Restoring previous memory policy: 4 00:06:04.393 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.393 EAL: request: mp_malloc_sync 00:06:04.393 EAL: No shared files mode enabled, IPC is disabled 00:06:04.393 EAL: Heap on socket 0 was expanded by 6MB 00:06:04.393 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.393 EAL: request: mp_malloc_sync 00:06:04.393 EAL: No shared files mode enabled, IPC is disabled 00:06:04.394 EAL: Heap on socket 0 was shrunk by 6MB 00:06:04.394 EAL: Trying to obtain current memory policy. 00:06:04.394 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.394 EAL: Restoring previous memory policy: 4 00:06:04.394 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.394 EAL: request: mp_malloc_sync 00:06:04.394 EAL: No shared files mode enabled, IPC is disabled 00:06:04.394 EAL: Heap on socket 0 was expanded by 10MB 00:06:04.394 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.394 EAL: request: mp_malloc_sync 00:06:04.394 EAL: No shared files mode enabled, IPC is disabled 00:06:04.394 EAL: Heap on socket 0 was shrunk by 10MB 00:06:04.394 EAL: Trying to obtain current memory policy. 
00:06:04.394 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.394 EAL: Restoring previous memory policy: 4 00:06:04.394 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.394 EAL: request: mp_malloc_sync 00:06:04.394 EAL: No shared files mode enabled, IPC is disabled 00:06:04.394 EAL: Heap on socket 0 was expanded by 18MB 00:06:04.394 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.394 EAL: request: mp_malloc_sync 00:06:04.394 EAL: No shared files mode enabled, IPC is disabled 00:06:04.394 EAL: Heap on socket 0 was shrunk by 18MB 00:06:04.394 EAL: Trying to obtain current memory policy. 00:06:04.394 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.394 EAL: Restoring previous memory policy: 4 00:06:04.394 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.394 EAL: request: mp_malloc_sync 00:06:04.394 EAL: No shared files mode enabled, IPC is disabled 00:06:04.394 EAL: Heap on socket 0 was expanded by 34MB 00:06:04.394 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.394 EAL: request: mp_malloc_sync 00:06:04.394 EAL: No shared files mode enabled, IPC is disabled 00:06:04.394 EAL: Heap on socket 0 was shrunk by 34MB 00:06:04.394 EAL: Trying to obtain current memory policy. 00:06:04.394 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.394 EAL: Restoring previous memory policy: 4 00:06:04.394 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.394 EAL: request: mp_malloc_sync 00:06:04.394 EAL: No shared files mode enabled, IPC is disabled 00:06:04.394 EAL: Heap on socket 0 was expanded by 66MB 00:06:04.394 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.394 EAL: request: mp_malloc_sync 00:06:04.394 EAL: No shared files mode enabled, IPC is disabled 00:06:04.394 EAL: Heap on socket 0 was shrunk by 66MB 00:06:04.394 EAL: Trying to obtain current memory policy. 00:06:04.394 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.394 EAL: Restoring previous memory policy: 4 00:06:04.394 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.394 EAL: request: mp_malloc_sync 00:06:04.394 EAL: No shared files mode enabled, IPC is disabled 00:06:04.394 EAL: Heap on socket 0 was expanded by 130MB 00:06:04.394 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.394 EAL: request: mp_malloc_sync 00:06:04.394 EAL: No shared files mode enabled, IPC is disabled 00:06:04.394 EAL: Heap on socket 0 was shrunk by 130MB 00:06:04.394 EAL: Trying to obtain current memory policy. 00:06:04.394 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.394 EAL: Restoring previous memory policy: 4 00:06:04.394 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.394 EAL: request: mp_malloc_sync 00:06:04.394 EAL: No shared files mode enabled, IPC is disabled 00:06:04.394 EAL: Heap on socket 0 was expanded by 258MB 00:06:04.394 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.654 EAL: request: mp_malloc_sync 00:06:04.654 EAL: No shared files mode enabled, IPC is disabled 00:06:04.654 EAL: Heap on socket 0 was shrunk by 258MB 00:06:04.654 EAL: Trying to obtain current memory policy. 
00:06:04.654 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.654 EAL: Restoring previous memory policy: 4 00:06:04.654 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.654 EAL: request: mp_malloc_sync 00:06:04.654 EAL: No shared files mode enabled, IPC is disabled 00:06:04.654 EAL: Heap on socket 0 was expanded by 514MB 00:06:04.654 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.913 EAL: request: mp_malloc_sync 00:06:04.913 EAL: No shared files mode enabled, IPC is disabled 00:06:04.913 EAL: Heap on socket 0 was shrunk by 514MB 00:06:04.913 EAL: Trying to obtain current memory policy. 00:06:04.913 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.913 EAL: Restoring previous memory policy: 4 00:06:04.913 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.913 EAL: request: mp_malloc_sync 00:06:04.913 EAL: No shared files mode enabled, IPC is disabled 00:06:04.913 EAL: Heap on socket 0 was expanded by 1026MB 00:06:05.173 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.432 passed 00:06:05.432 00:06:05.432 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.432 suites 1 1 n/a 0 0 00:06:05.432 tests 2 2 2 0 0 00:06:05.432 asserts 5365 5365 5365 0 n/a 00:06:05.432 00:06:05.432 Elapsed time = 1.475 seconds 00:06:05.432 EAL: request: mp_malloc_sync 00:06:05.432 EAL: No shared files mode enabled, IPC is disabled 00:06:05.432 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:05.432 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.432 EAL: request: mp_malloc_sync 00:06:05.432 EAL: No shared files mode enabled, IPC is disabled 00:06:05.432 EAL: Heap on socket 0 was shrunk by 2MB 00:06:05.432 EAL: No shared files mode enabled, IPC is disabled 00:06:05.432 EAL: No shared files mode enabled, IPC is disabled 00:06:05.432 EAL: No shared files mode enabled, IPC is disabled 00:06:05.432 00:06:05.432 real 0m1.732s 00:06:05.432 user 0m0.808s 00:06:05.432 sys 0m0.789s 00:06:05.432 17:52:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.432 17:52:22 -- common/autotest_common.sh@10 -- # set +x 00:06:05.432 ************************************ 00:06:05.432 END TEST env_vtophys 00:06:05.432 ************************************ 00:06:05.432 17:52:22 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:05.432 17:52:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.432 17:52:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.432 17:52:22 -- common/autotest_common.sh@10 -- # set +x 00:06:05.432 ************************************ 00:06:05.432 START TEST env_pci 00:06:05.432 ************************************ 00:06:05.432 17:52:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:05.432 00:06:05.432 00:06:05.432 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.432 http://cunit.sourceforge.net/ 00:06:05.432 00:06:05.433 00:06:05.433 Suite: pci 00:06:05.433 Test: pci_hook ...[2024-11-26 17:52:22.282446] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68539 has claimed it 00:06:05.433 passed 00:06:05.433 00:06:05.433 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.433 suites 1 1 n/a 0 0 00:06:05.433 tests 1 1 1 0 0 00:06:05.433 asserts 25 25 25 0 n/a 00:06:05.433 00:06:05.433 Elapsed time = 0.007 seconds 00:06:05.433 EAL: Cannot find device (10000:00:01.0) 00:06:05.433 EAL: Failed to attach device 
on primary process 00:06:05.433 00:06:05.433 real 0m0.095s 00:06:05.433 user 0m0.041s 00:06:05.433 sys 0m0.054s 00:06:05.433 17:52:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.433 ************************************ 00:06:05.433 END TEST env_pci 00:06:05.433 ************************************ 00:06:05.433 17:52:22 -- common/autotest_common.sh@10 -- # set +x 00:06:05.692 17:52:22 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:05.692 17:52:22 -- env/env.sh@15 -- # uname 00:06:05.692 17:52:22 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:05.692 17:52:22 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:05.692 17:52:22 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:05.692 17:52:22 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:06:05.692 17:52:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.692 17:52:22 -- common/autotest_common.sh@10 -- # set +x 00:06:05.692 ************************************ 00:06:05.692 START TEST env_dpdk_post_init 00:06:05.692 ************************************ 00:06:05.692 17:52:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:05.692 EAL: Detected CPU lcores: 10 00:06:05.692 EAL: Detected NUMA nodes: 1 00:06:05.692 EAL: Detected shared linkage of DPDK 00:06:05.692 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:05.692 EAL: Selected IOVA mode 'PA' 00:06:05.692 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:05.951 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:06:05.951 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:06:05.951 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:06:05.951 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:06:05.951 Starting DPDK initialization... 00:06:05.951 Starting SPDK post initialization... 00:06:05.951 SPDK NVMe probe 00:06:05.951 Attaching to 0000:00:06.0 00:06:05.951 Attaching to 0000:00:07.0 00:06:05.951 Attaching to 0000:00:08.0 00:06:05.951 Attaching to 0000:00:09.0 00:06:05.951 Attached to 0000:00:06.0 00:06:05.951 Attached to 0000:00:07.0 00:06:05.951 Attached to 0000:00:09.0 00:06:05.951 Attached to 0000:00:08.0 00:06:05.951 Cleaning up... 
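The env/env.sh trace above (@14, @15, @22, @24) shows how the coremask and base-virtaddr flags handed to env_dpdk_post_init are assembled; a hedged reconstruction of that fragment (the unquoted $argv expansion is what yields the two separate arguments in the traced command):

  argv='-c 0x1 '                                 # env/env.sh@14: pin the test to core 0
  if [ "$(uname)" = Linux ]; then                # env/env.sh@15: traced as '[' Linux = Linux ']'
          argv+=--base-virtaddr=0x200000000000   # env/env.sh@22: fixed VA base for reproducible maps
  fi
  run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init $argv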
00:06:05.951 00:06:05.951 real 0m0.252s 00:06:05.951 user 0m0.073s 00:06:05.951 sys 0m0.082s 00:06:05.951 17:52:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.951 17:52:22 -- common/autotest_common.sh@10 -- # set +x 00:06:05.951 ************************************ 00:06:05.951 END TEST env_dpdk_post_init 00:06:05.951 ************************************ 00:06:05.951 17:52:22 -- env/env.sh@26 -- # uname 00:06:05.951 17:52:22 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:05.951 17:52:22 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:05.951 17:52:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.951 17:52:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.951 17:52:22 -- common/autotest_common.sh@10 -- # set +x 00:06:05.951 ************************************ 00:06:05.951 START TEST env_mem_callbacks 00:06:05.951 ************************************ 00:06:05.951 17:52:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:05.951 EAL: Detected CPU lcores: 10 00:06:05.951 EAL: Detected NUMA nodes: 1 00:06:05.951 EAL: Detected shared linkage of DPDK 00:06:05.951 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:05.951 EAL: Selected IOVA mode 'PA' 00:06:06.211 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:06.211 00:06:06.211 00:06:06.211 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.211 http://cunit.sourceforge.net/ 00:06:06.211 00:06:06.211 00:06:06.211 Suite: memory 00:06:06.211 Test: test ... 00:06:06.211 register 0x200000200000 2097152 00:06:06.211 malloc 3145728 00:06:06.211 register 0x200000400000 4194304 00:06:06.211 buf 0x200000500000 len 3145728 PASSED 00:06:06.211 malloc 64 00:06:06.211 buf 0x2000004fff40 len 64 PASSED 00:06:06.211 malloc 4194304 00:06:06.211 register 0x200000800000 6291456 00:06:06.211 buf 0x200000a00000 len 4194304 PASSED 00:06:06.211 free 0x200000500000 3145728 00:06:06.211 free 0x2000004fff40 64 00:06:06.211 unregister 0x200000400000 4194304 PASSED 00:06:06.211 free 0x200000a00000 4194304 00:06:06.211 unregister 0x200000800000 6291456 PASSED 00:06:06.211 malloc 8388608 00:06:06.211 register 0x200000400000 10485760 00:06:06.211 buf 0x200000600000 len 8388608 PASSED 00:06:06.211 free 0x200000600000 8388608 00:06:06.211 unregister 0x200000400000 10485760 PASSED 00:06:06.211 passed 00:06:06.211 00:06:06.211 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.211 suites 1 1 n/a 0 0 00:06:06.211 tests 1 1 1 0 0 00:06:06.211 asserts 15 15 15 0 n/a 00:06:06.211 00:06:06.211 Elapsed time = 0.011 seconds 00:06:06.211 00:06:06.211 real 0m0.194s 00:06:06.211 user 0m0.038s 00:06:06.211 sys 0m0.054s 00:06:06.211 17:52:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.211 17:52:22 -- common/autotest_common.sh@10 -- # set +x 00:06:06.211 ************************************ 00:06:06.211 END TEST env_mem_callbacks 00:06:06.211 ************************************ 00:06:06.211 00:06:06.211 real 0m3.168s 00:06:06.211 user 0m1.473s 00:06:06.211 sys 0m1.350s 00:06:06.211 17:52:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.211 17:52:23 -- common/autotest_common.sh@10 -- # set +x 00:06:06.211 ************************************ 00:06:06.211 END TEST env 00:06:06.211 ************************************ 00:06:06.211 17:52:23 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
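Every suite in this log is launched through the run_test helper from autotest_common.sh; the traced arity check at @1087, the xtrace toggling at @1093/@1115, the START/END banners, and the bash time output (real/user/sys) suggest a wrapper along these lines. This is a sketch inferred from the trace, not the verbatim helper:

  run_test() {
          # autotest_common.sh@1087: needs a name plus a command;
          # traced above as '[' 2 -le 1 ']' (or '[' 5 -le 1 ']' with extra args)
          if [ "$#" -le 1 ]; then
                  return 1
          fi
          local test_name=$1
          shift                                # remaining args: the test command itself
          xtrace_disable                       # @1093: keep the banner lines out of the trace
          echo '************************************'
          echo "START TEST $test_name"
          echo '************************************'
          time "$@"                            # @1114: run the test, emitting real/user/sys
          local rc=$?
          echo '************************************'
          echo "END TEST $test_name"
          echo '************************************'
          return $rc
  }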
00:06:06.211 17:52:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:06.211 17:52:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.211 17:52:23 -- common/autotest_common.sh@10 -- # set +x 00:06:06.211 ************************************ 00:06:06.211 START TEST rpc 00:06:06.211 ************************************ 00:06:06.211 17:52:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:06.470 * Looking for test storage... 00:06:06.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:06.470 17:52:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:06.470 17:52:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:06.470 17:52:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:06.470 17:52:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:06.470 17:52:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:06.470 17:52:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:06.470 17:52:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:06.470 17:52:23 -- scripts/common.sh@335 -- # IFS=.-: 00:06:06.470 17:52:23 -- scripts/common.sh@335 -- # read -ra ver1 00:06:06.470 17:52:23 -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.470 17:52:23 -- scripts/common.sh@336 -- # read -ra ver2 00:06:06.470 17:52:23 -- scripts/common.sh@337 -- # local 'op=<' 00:06:06.470 17:52:23 -- scripts/common.sh@339 -- # ver1_l=2 00:06:06.470 17:52:23 -- scripts/common.sh@340 -- # ver2_l=1 00:06:06.470 17:52:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:06.470 17:52:23 -- scripts/common.sh@343 -- # case "$op" in 00:06:06.470 17:52:23 -- scripts/common.sh@344 -- # : 1 00:06:06.470 17:52:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:06.470 17:52:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.470 17:52:23 -- scripts/common.sh@364 -- # decimal 1 00:06:06.470 17:52:23 -- scripts/common.sh@352 -- # local d=1 00:06:06.470 17:52:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.470 17:52:23 -- scripts/common.sh@354 -- # echo 1 00:06:06.470 17:52:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:06.470 17:52:23 -- scripts/common.sh@365 -- # decimal 2 00:06:06.470 17:52:23 -- scripts/common.sh@352 -- # local d=2 00:06:06.470 17:52:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.470 17:52:23 -- scripts/common.sh@354 -- # echo 2 00:06:06.470 17:52:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:06.470 17:52:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:06.470 17:52:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:06.470 17:52:23 -- scripts/common.sh@367 -- # return 0 00:06:06.470 17:52:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.470 17:52:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:06.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.470 --rc genhtml_branch_coverage=1 00:06:06.470 --rc genhtml_function_coverage=1 00:06:06.470 --rc genhtml_legend=1 00:06:06.470 --rc geninfo_all_blocks=1 00:06:06.470 --rc geninfo_unexecuted_blocks=1 00:06:06.470 00:06:06.470 ' 00:06:06.470 17:52:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:06.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.470 --rc genhtml_branch_coverage=1 00:06:06.470 --rc genhtml_function_coverage=1 00:06:06.470 --rc genhtml_legend=1 00:06:06.470 --rc geninfo_all_blocks=1 00:06:06.470 --rc geninfo_unexecuted_blocks=1 00:06:06.470 00:06:06.470 ' 00:06:06.471 17:52:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:06.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.471 --rc genhtml_branch_coverage=1 00:06:06.471 --rc genhtml_function_coverage=1 00:06:06.471 --rc genhtml_legend=1 00:06:06.471 --rc geninfo_all_blocks=1 00:06:06.471 --rc geninfo_unexecuted_blocks=1 00:06:06.471 00:06:06.471 ' 00:06:06.471 17:52:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:06.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.471 --rc genhtml_branch_coverage=1 00:06:06.471 --rc genhtml_function_coverage=1 00:06:06.471 --rc genhtml_legend=1 00:06:06.471 --rc geninfo_all_blocks=1 00:06:06.471 --rc geninfo_unexecuted_blocks=1 00:06:06.471 00:06:06.471 ' 00:06:06.471 17:52:23 -- rpc/rpc.sh@65 -- # spdk_pid=68664 00:06:06.471 17:52:23 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:06.471 17:52:23 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.471 17:52:23 -- rpc/rpc.sh@67 -- # waitforlisten 68664 00:06:06.471 17:52:23 -- common/autotest_common.sh@829 -- # '[' -z 68664 ']' 00:06:06.471 17:52:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.471 17:52:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.471 17:52:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
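rpc/rpc.sh@67 then parks in waitforlisten until the freshly forked spdk_tgt (pid 68664) is accepting RPCs; the traced locals (rpc_addr defaulting to /var/tmp/spdk.sock at @833, max_retries=100 at @834) point at a polling loop roughly like this sketch (the exact readiness probe is an assumption):

  waitforlisten() {
          local pid=$1
          local rpc_addr=${2:-/var/tmp/spdk.sock}   # autotest_common.sh@833
          local max_retries=100                     # autotest_common.sh@834
          echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
          local i
          for ((i = 0; i < max_retries; i++)); do
                  kill -0 "$pid" 2>/dev/null || return 1   # target died before it ever listened
                  [ -S "$rpc_addr" ] && return 0           # socket exists: assume the target is ready
                  sleep 0.1
          done
          return 1
  }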
00:06:06.471 17:52:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.471 17:52:23 -- common/autotest_common.sh@10 -- # set +x 00:06:06.471 [2024-11-26 17:52:23.393178] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:06.471 [2024-11-26 17:52:23.393345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68664 ] 00:06:06.741 [2024-11-26 17:52:23.546130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.741 [2024-11-26 17:52:23.590717] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:06.741 [2024-11-26 17:52:23.590904] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:06.741 [2024-11-26 17:52:23.590931] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68664' to capture a snapshot of events at runtime. 00:06:06.741 [2024-11-26 17:52:23.590952] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68664 for offline analysis/debug. 00:06:06.741 [2024-11-26 17:52:23.590994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.321 17:52:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.321 17:52:24 -- common/autotest_common.sh@862 -- # return 0 00:06:07.321 17:52:24 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:07.321 17:52:24 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:07.321 17:52:24 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:07.321 17:52:24 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:07.321 17:52:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.321 17:52:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.321 17:52:24 -- common/autotest_common.sh@10 -- # set +x 00:06:07.321 ************************************ 00:06:07.322 START TEST rpc_integrity 00:06:07.322 ************************************ 00:06:07.322 17:52:24 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:06:07.322 17:52:24 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:07.322 17:52:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.322 17:52:24 -- common/autotest_common.sh@10 -- # set +x 00:06:07.322 17:52:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.322 17:52:24 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:07.322 17:52:24 -- rpc/rpc.sh@13 -- # jq length 00:06:07.580 17:52:24 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:07.580 17:52:24 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:07.580 17:52:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.580 17:52:24 -- common/autotest_common.sh@10 -- # set +x 00:06:07.580 17:52:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.580 17:52:24 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:07.580 17:52:24 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:07.580 17:52:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.581 17:52:24 -- 
common/autotest_common.sh@10 -- # set +x 00:06:07.581 17:52:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.581 17:52:24 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:07.581 { 00:06:07.581 "name": "Malloc0", 00:06:07.581 "aliases": [ 00:06:07.581 "500830d9-4f28-45ed-b2e3-ff62d77a79ec" 00:06:07.581 ], 00:06:07.581 "product_name": "Malloc disk", 00:06:07.581 "block_size": 512, 00:06:07.581 "num_blocks": 16384, 00:06:07.581 "uuid": "500830d9-4f28-45ed-b2e3-ff62d77a79ec", 00:06:07.581 "assigned_rate_limits": { 00:06:07.581 "rw_ios_per_sec": 0, 00:06:07.581 "rw_mbytes_per_sec": 0, 00:06:07.581 "r_mbytes_per_sec": 0, 00:06:07.581 "w_mbytes_per_sec": 0 00:06:07.581 }, 00:06:07.581 "claimed": false, 00:06:07.581 "zoned": false, 00:06:07.581 "supported_io_types": { 00:06:07.581 "read": true, 00:06:07.581 "write": true, 00:06:07.581 "unmap": true, 00:06:07.581 "write_zeroes": true, 00:06:07.581 "flush": true, 00:06:07.581 "reset": true, 00:06:07.581 "compare": false, 00:06:07.581 "compare_and_write": false, 00:06:07.581 "abort": true, 00:06:07.581 "nvme_admin": false, 00:06:07.581 "nvme_io": false 00:06:07.581 }, 00:06:07.581 "memory_domains": [ 00:06:07.581 { 00:06:07.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.581 "dma_device_type": 2 00:06:07.581 } 00:06:07.581 ], 00:06:07.581 "driver_specific": {} 00:06:07.581 } 00:06:07.581 ]' 00:06:07.581 17:52:24 -- rpc/rpc.sh@17 -- # jq length 00:06:07.581 17:52:24 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:07.581 17:52:24 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:07.581 17:52:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.581 17:52:24 -- common/autotest_common.sh@10 -- # set +x 00:06:07.581 [2024-11-26 17:52:24.344473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:07.581 [2024-11-26 17:52:24.344536] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:07.581 [2024-11-26 17:52:24.344566] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:07.581 [2024-11-26 17:52:24.344584] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:07.581 [2024-11-26 17:52:24.347174] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:07.581 [2024-11-26 17:52:24.347217] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:07.581 Passthru0 00:06:07.581 17:52:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.581 17:52:24 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:07.581 17:52:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.581 17:52:24 -- common/autotest_common.sh@10 -- # set +x 00:06:07.581 17:52:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.581 17:52:24 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:07.581 { 00:06:07.581 "name": "Malloc0", 00:06:07.581 "aliases": [ 00:06:07.581 "500830d9-4f28-45ed-b2e3-ff62d77a79ec" 00:06:07.581 ], 00:06:07.581 "product_name": "Malloc disk", 00:06:07.581 "block_size": 512, 00:06:07.581 "num_blocks": 16384, 00:06:07.581 "uuid": "500830d9-4f28-45ed-b2e3-ff62d77a79ec", 00:06:07.581 "assigned_rate_limits": { 00:06:07.581 "rw_ios_per_sec": 0, 00:06:07.581 "rw_mbytes_per_sec": 0, 00:06:07.581 "r_mbytes_per_sec": 0, 00:06:07.581 "w_mbytes_per_sec": 0 00:06:07.581 }, 00:06:07.581 "claimed": true, 00:06:07.581 "claim_type": "exclusive_write", 00:06:07.581 "zoned": false, 00:06:07.581 "supported_io_types": { 00:06:07.581 "read": true, 
00:06:07.581 "write": true, 00:06:07.581 "unmap": true, 00:06:07.581 "write_zeroes": true, 00:06:07.581 "flush": true, 00:06:07.581 "reset": true, 00:06:07.581 "compare": false, 00:06:07.581 "compare_and_write": false, 00:06:07.581 "abort": true, 00:06:07.581 "nvme_admin": false, 00:06:07.581 "nvme_io": false 00:06:07.581 }, 00:06:07.581 "memory_domains": [ 00:06:07.581 { 00:06:07.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.581 "dma_device_type": 2 00:06:07.581 } 00:06:07.581 ], 00:06:07.581 "driver_specific": {} 00:06:07.581 }, 00:06:07.581 { 00:06:07.581 "name": "Passthru0", 00:06:07.581 "aliases": [ 00:06:07.581 "29f152e4-fa35-5bd4-8b86-a4b4337a8799" 00:06:07.581 ], 00:06:07.581 "product_name": "passthru", 00:06:07.581 "block_size": 512, 00:06:07.581 "num_blocks": 16384, 00:06:07.581 "uuid": "29f152e4-fa35-5bd4-8b86-a4b4337a8799", 00:06:07.581 "assigned_rate_limits": { 00:06:07.581 "rw_ios_per_sec": 0, 00:06:07.581 "rw_mbytes_per_sec": 0, 00:06:07.581 "r_mbytes_per_sec": 0, 00:06:07.581 "w_mbytes_per_sec": 0 00:06:07.581 }, 00:06:07.581 "claimed": false, 00:06:07.581 "zoned": false, 00:06:07.581 "supported_io_types": { 00:06:07.581 "read": true, 00:06:07.581 "write": true, 00:06:07.581 "unmap": true, 00:06:07.581 "write_zeroes": true, 00:06:07.581 "flush": true, 00:06:07.581 "reset": true, 00:06:07.581 "compare": false, 00:06:07.581 "compare_and_write": false, 00:06:07.581 "abort": true, 00:06:07.581 "nvme_admin": false, 00:06:07.581 "nvme_io": false 00:06:07.581 }, 00:06:07.581 "memory_domains": [ 00:06:07.581 { 00:06:07.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.581 "dma_device_type": 2 00:06:07.581 } 00:06:07.581 ], 00:06:07.581 "driver_specific": { 00:06:07.581 "passthru": { 00:06:07.581 "name": "Passthru0", 00:06:07.581 "base_bdev_name": "Malloc0" 00:06:07.581 } 00:06:07.581 } 00:06:07.581 } 00:06:07.581 ]' 00:06:07.581 17:52:24 -- rpc/rpc.sh@21 -- # jq length 00:06:07.581 17:52:24 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:07.581 17:52:24 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:07.581 17:52:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.581 17:52:24 -- common/autotest_common.sh@10 -- # set +x 00:06:07.581 17:52:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.581 17:52:24 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:07.581 17:52:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.581 17:52:24 -- common/autotest_common.sh@10 -- # set +x 00:06:07.581 17:52:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.581 17:52:24 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:07.581 17:52:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.581 17:52:24 -- common/autotest_common.sh@10 -- # set +x 00:06:07.581 17:52:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.581 17:52:24 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:07.581 17:52:24 -- rpc/rpc.sh@26 -- # jq length 00:06:07.581 17:52:24 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:07.581 00:06:07.581 real 0m0.273s 00:06:07.581 user 0m0.157s 00:06:07.581 sys 0m0.054s 00:06:07.581 17:52:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.581 17:52:24 -- common/autotest_common.sh@10 -- # set +x 00:06:07.581 ************************************ 00:06:07.581 END TEST rpc_integrity 00:06:07.581 ************************************ 00:06:07.840 17:52:24 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:07.840 17:52:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 
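The rpc_integrity pass above exercises the malloc and passthru bdev RPCs end to end; assuming the stock scripts/rpc.py client talking to the same /var/tmp/spdk.sock, the traced rpc_cmd calls correspond to:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 8 512                      # 8 MiB / 512 B blocks -> Malloc0 (16384 blocks, as dumped above)
  $rpc bdev_passthru_create -b Malloc0 -p Passthru0  # claim Malloc0 and expose Passthru0 on top
  $rpc bdev_get_bdevs | jq length                    # 2: Malloc0 plus Passthru0
  $rpc bdev_passthru_delete Passthru0
  $rpc bdev_malloc_delete Malloc0
  $rpc bdev_get_bdevs | jq length                    # back to 0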
00:06:07.840 17:52:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.840 17:52:24 -- common/autotest_common.sh@10 -- # set +x 00:06:07.840 ************************************ 00:06:07.840 START TEST rpc_plugins 00:06:07.840 ************************************ 00:06:07.840 17:52:24 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:06:07.840 17:52:24 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:07.840 17:52:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.840 17:52:24 -- common/autotest_common.sh@10 -- # set +x 00:06:07.840 17:52:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.840 17:52:24 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:07.840 17:52:24 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:07.840 17:52:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.840 17:52:24 -- common/autotest_common.sh@10 -- # set +x 00:06:07.840 17:52:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.840 17:52:24 -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:07.840 { 00:06:07.840 "name": "Malloc1", 00:06:07.840 "aliases": [ 00:06:07.841 "4632efee-ef16-4670-9fd6-31d64c27dad7" 00:06:07.841 ], 00:06:07.841 "product_name": "Malloc disk", 00:06:07.841 "block_size": 4096, 00:06:07.841 "num_blocks": 256, 00:06:07.841 "uuid": "4632efee-ef16-4670-9fd6-31d64c27dad7", 00:06:07.841 "assigned_rate_limits": { 00:06:07.841 "rw_ios_per_sec": 0, 00:06:07.841 "rw_mbytes_per_sec": 0, 00:06:07.841 "r_mbytes_per_sec": 0, 00:06:07.841 "w_mbytes_per_sec": 0 00:06:07.841 }, 00:06:07.841 "claimed": false, 00:06:07.841 "zoned": false, 00:06:07.841 "supported_io_types": { 00:06:07.841 "read": true, 00:06:07.841 "write": true, 00:06:07.841 "unmap": true, 00:06:07.841 "write_zeroes": true, 00:06:07.841 "flush": true, 00:06:07.841 "reset": true, 00:06:07.841 "compare": false, 00:06:07.841 "compare_and_write": false, 00:06:07.841 "abort": true, 00:06:07.841 "nvme_admin": false, 00:06:07.841 "nvme_io": false 00:06:07.841 }, 00:06:07.841 "memory_domains": [ 00:06:07.841 { 00:06:07.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.841 "dma_device_type": 2 00:06:07.841 } 00:06:07.841 ], 00:06:07.841 "driver_specific": {} 00:06:07.841 } 00:06:07.841 ]' 00:06:07.841 17:52:24 -- rpc/rpc.sh@32 -- # jq length 00:06:07.841 17:52:24 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:07.841 17:52:24 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:07.841 17:52:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.841 17:52:24 -- common/autotest_common.sh@10 -- # set +x 00:06:07.841 17:52:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.841 17:52:24 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:07.841 17:52:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.841 17:52:24 -- common/autotest_common.sh@10 -- # set +x 00:06:07.841 17:52:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.841 17:52:24 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:07.841 17:52:24 -- rpc/rpc.sh@36 -- # jq length 00:06:07.841 17:52:24 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:07.841 00:06:07.841 real 0m0.150s 00:06:07.841 user 0m0.080s 00:06:07.841 sys 0m0.033s 00:06:07.841 17:52:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.841 ************************************ 00:06:07.841 END TEST rpc_plugins 00:06:07.841 ************************************ 00:06:07.841 17:52:24 -- common/autotest_common.sh@10 -- # set +x 00:06:07.841 17:52:24 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:06:07.841 17:52:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.841 17:52:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.841 17:52:24 -- common/autotest_common.sh@10 -- # set +x 00:06:08.100 ************************************ 00:06:08.100 START TEST rpc_trace_cmd_test 00:06:08.100 ************************************ 00:06:08.100 17:52:24 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:06:08.100 17:52:24 -- rpc/rpc.sh@40 -- # local info 00:06:08.100 17:52:24 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:08.100 17:52:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.100 17:52:24 -- common/autotest_common.sh@10 -- # set +x 00:06:08.100 17:52:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.100 17:52:24 -- rpc/rpc.sh@42 -- # info='{ 00:06:08.100 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68664", 00:06:08.100 "tpoint_group_mask": "0x8", 00:06:08.100 "iscsi_conn": { 00:06:08.100 "mask": "0x2", 00:06:08.100 "tpoint_mask": "0x0" 00:06:08.100 }, 00:06:08.100 "scsi": { 00:06:08.100 "mask": "0x4", 00:06:08.100 "tpoint_mask": "0x0" 00:06:08.100 }, 00:06:08.100 "bdev": { 00:06:08.100 "mask": "0x8", 00:06:08.100 "tpoint_mask": "0xffffffffffffffff" 00:06:08.100 }, 00:06:08.100 "nvmf_rdma": { 00:06:08.100 "mask": "0x10", 00:06:08.100 "tpoint_mask": "0x0" 00:06:08.100 }, 00:06:08.100 "nvmf_tcp": { 00:06:08.100 "mask": "0x20", 00:06:08.100 "tpoint_mask": "0x0" 00:06:08.100 }, 00:06:08.100 "ftl": { 00:06:08.100 "mask": "0x40", 00:06:08.100 "tpoint_mask": "0x0" 00:06:08.100 }, 00:06:08.100 "blobfs": { 00:06:08.100 "mask": "0x80", 00:06:08.100 "tpoint_mask": "0x0" 00:06:08.100 }, 00:06:08.100 "dsa": { 00:06:08.100 "mask": "0x200", 00:06:08.100 "tpoint_mask": "0x0" 00:06:08.100 }, 00:06:08.100 "thread": { 00:06:08.100 "mask": "0x400", 00:06:08.100 "tpoint_mask": "0x0" 00:06:08.100 }, 00:06:08.100 "nvme_pcie": { 00:06:08.100 "mask": "0x800", 00:06:08.100 "tpoint_mask": "0x0" 00:06:08.100 }, 00:06:08.100 "iaa": { 00:06:08.100 "mask": "0x1000", 00:06:08.100 "tpoint_mask": "0x0" 00:06:08.100 }, 00:06:08.100 "nvme_tcp": { 00:06:08.100 "mask": "0x2000", 00:06:08.100 "tpoint_mask": "0x0" 00:06:08.100 }, 00:06:08.100 "bdev_nvme": { 00:06:08.100 "mask": "0x4000", 00:06:08.100 "tpoint_mask": "0x0" 00:06:08.100 } 00:06:08.100 }' 00:06:08.100 17:52:24 -- rpc/rpc.sh@43 -- # jq length 00:06:08.100 17:52:24 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:06:08.100 17:52:24 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:08.100 17:52:24 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:08.100 17:52:24 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:08.100 17:52:24 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:08.100 17:52:24 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:08.100 17:52:24 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:08.100 17:52:24 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:08.100 17:52:24 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:08.100 00:06:08.100 real 0m0.213s 00:06:08.100 user 0m0.173s 00:06:08.100 sys 0m0.029s 00:06:08.100 17:52:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.100 17:52:24 -- common/autotest_common.sh@10 -- # set +x 00:06:08.100 ************************************ 00:06:08.100 END TEST rpc_trace_cmd_test 00:06:08.100 ************************************ 00:06:08.359 17:52:25 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:08.359 17:52:25 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:08.359 17:52:25 -- rpc/rpc.sh@81 -- # run_test 
rpc_daemon_integrity rpc_integrity 00:06:08.359 17:52:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.359 17:52:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.359 17:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:08.359 ************************************ 00:06:08.359 START TEST rpc_daemon_integrity 00:06:08.359 ************************************ 00:06:08.359 17:52:25 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:06:08.360 17:52:25 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:08.360 17:52:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.360 17:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:08.360 17:52:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.360 17:52:25 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:08.360 17:52:25 -- rpc/rpc.sh@13 -- # jq length 00:06:08.360 17:52:25 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:08.360 17:52:25 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:08.360 17:52:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.360 17:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:08.360 17:52:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.360 17:52:25 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:08.360 17:52:25 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:08.360 17:52:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.360 17:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:08.360 17:52:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.360 17:52:25 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:08.360 { 00:06:08.360 "name": "Malloc2", 00:06:08.360 "aliases": [ 00:06:08.360 "de1ac50c-6d40-48b1-8830-944c72498557" 00:06:08.360 ], 00:06:08.360 "product_name": "Malloc disk", 00:06:08.360 "block_size": 512, 00:06:08.360 "num_blocks": 16384, 00:06:08.360 "uuid": "de1ac50c-6d40-48b1-8830-944c72498557", 00:06:08.360 "assigned_rate_limits": { 00:06:08.360 "rw_ios_per_sec": 0, 00:06:08.360 "rw_mbytes_per_sec": 0, 00:06:08.360 "r_mbytes_per_sec": 0, 00:06:08.360 "w_mbytes_per_sec": 0 00:06:08.360 }, 00:06:08.360 "claimed": false, 00:06:08.360 "zoned": false, 00:06:08.360 "supported_io_types": { 00:06:08.360 "read": true, 00:06:08.360 "write": true, 00:06:08.360 "unmap": true, 00:06:08.360 "write_zeroes": true, 00:06:08.360 "flush": true, 00:06:08.360 "reset": true, 00:06:08.360 "compare": false, 00:06:08.360 "compare_and_write": false, 00:06:08.360 "abort": true, 00:06:08.360 "nvme_admin": false, 00:06:08.360 "nvme_io": false 00:06:08.360 }, 00:06:08.360 "memory_domains": [ 00:06:08.360 { 00:06:08.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.360 "dma_device_type": 2 00:06:08.360 } 00:06:08.360 ], 00:06:08.360 "driver_specific": {} 00:06:08.360 } 00:06:08.360 ]' 00:06:08.360 17:52:25 -- rpc/rpc.sh@17 -- # jq length 00:06:08.360 17:52:25 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:08.360 17:52:25 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:08.360 17:52:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.360 17:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:08.360 [2024-11-26 17:52:25.188513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:08.360 [2024-11-26 17:52:25.188579] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:08.360 [2024-11-26 17:52:25.188600] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:08.360 [2024-11-26 
17:52:25.188619] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:08.360 [2024-11-26 17:52:25.191136] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:08.360 [2024-11-26 17:52:25.191186] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:08.360 Passthru0 00:06:08.360 17:52:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.360 17:52:25 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:08.360 17:52:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.360 17:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:08.360 17:52:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.360 17:52:25 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:08.360 { 00:06:08.360 "name": "Malloc2", 00:06:08.360 "aliases": [ 00:06:08.360 "de1ac50c-6d40-48b1-8830-944c72498557" 00:06:08.360 ], 00:06:08.360 "product_name": "Malloc disk", 00:06:08.360 "block_size": 512, 00:06:08.360 "num_blocks": 16384, 00:06:08.360 "uuid": "de1ac50c-6d40-48b1-8830-944c72498557", 00:06:08.360 "assigned_rate_limits": { 00:06:08.360 "rw_ios_per_sec": 0, 00:06:08.360 "rw_mbytes_per_sec": 0, 00:06:08.360 "r_mbytes_per_sec": 0, 00:06:08.360 "w_mbytes_per_sec": 0 00:06:08.360 }, 00:06:08.360 "claimed": true, 00:06:08.360 "claim_type": "exclusive_write", 00:06:08.360 "zoned": false, 00:06:08.360 "supported_io_types": { 00:06:08.360 "read": true, 00:06:08.360 "write": true, 00:06:08.360 "unmap": true, 00:06:08.360 "write_zeroes": true, 00:06:08.360 "flush": true, 00:06:08.360 "reset": true, 00:06:08.360 "compare": false, 00:06:08.360 "compare_and_write": false, 00:06:08.360 "abort": true, 00:06:08.360 "nvme_admin": false, 00:06:08.360 "nvme_io": false 00:06:08.360 }, 00:06:08.360 "memory_domains": [ 00:06:08.360 { 00:06:08.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.360 "dma_device_type": 2 00:06:08.360 } 00:06:08.360 ], 00:06:08.360 "driver_specific": {} 00:06:08.360 }, 00:06:08.360 { 00:06:08.360 "name": "Passthru0", 00:06:08.360 "aliases": [ 00:06:08.360 "f65da006-e0be-5f4a-abe8-091996a75ce4" 00:06:08.360 ], 00:06:08.360 "product_name": "passthru", 00:06:08.360 "block_size": 512, 00:06:08.360 "num_blocks": 16384, 00:06:08.360 "uuid": "f65da006-e0be-5f4a-abe8-091996a75ce4", 00:06:08.360 "assigned_rate_limits": { 00:06:08.360 "rw_ios_per_sec": 0, 00:06:08.360 "rw_mbytes_per_sec": 0, 00:06:08.360 "r_mbytes_per_sec": 0, 00:06:08.360 "w_mbytes_per_sec": 0 00:06:08.360 }, 00:06:08.360 "claimed": false, 00:06:08.360 "zoned": false, 00:06:08.360 "supported_io_types": { 00:06:08.360 "read": true, 00:06:08.360 "write": true, 00:06:08.360 "unmap": true, 00:06:08.360 "write_zeroes": true, 00:06:08.360 "flush": true, 00:06:08.360 "reset": true, 00:06:08.360 "compare": false, 00:06:08.360 "compare_and_write": false, 00:06:08.360 "abort": true, 00:06:08.360 "nvme_admin": false, 00:06:08.360 "nvme_io": false 00:06:08.360 }, 00:06:08.360 "memory_domains": [ 00:06:08.360 { 00:06:08.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.360 "dma_device_type": 2 00:06:08.360 } 00:06:08.360 ], 00:06:08.360 "driver_specific": { 00:06:08.360 "passthru": { 00:06:08.360 "name": "Passthru0", 00:06:08.360 "base_bdev_name": "Malloc2" 00:06:08.360 } 00:06:08.360 } 00:06:08.360 } 00:06:08.360 ]' 00:06:08.360 17:52:25 -- rpc/rpc.sh@21 -- # jq length 00:06:08.360 17:52:25 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:08.360 17:52:25 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:08.360 17:52:25 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.360 17:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:08.360 17:52:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.360 17:52:25 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:08.360 17:52:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.360 17:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:08.360 17:52:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.360 17:52:25 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:08.360 17:52:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.360 17:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:08.360 17:52:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.360 17:52:25 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:08.619 17:52:25 -- rpc/rpc.sh@26 -- # jq length 00:06:08.619 17:52:25 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:08.619 00:06:08.619 real 0m0.275s 00:06:08.619 user 0m0.147s 00:06:08.619 sys 0m0.056s 00:06:08.619 17:52:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.619 17:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:08.619 ************************************ 00:06:08.619 END TEST rpc_daemon_integrity 00:06:08.619 ************************************ 00:06:08.619 17:52:25 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:08.619 17:52:25 -- rpc/rpc.sh@84 -- # killprocess 68664 00:06:08.619 17:52:25 -- common/autotest_common.sh@936 -- # '[' -z 68664 ']' 00:06:08.619 17:52:25 -- common/autotest_common.sh@940 -- # kill -0 68664 00:06:08.619 17:52:25 -- common/autotest_common.sh@941 -- # uname 00:06:08.619 17:52:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:08.619 17:52:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68664 00:06:08.620 17:52:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:08.620 17:52:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:08.620 killing process with pid 68664 00:06:08.620 17:52:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68664' 00:06:08.620 17:52:25 -- common/autotest_common.sh@955 -- # kill 68664 00:06:08.620 17:52:25 -- common/autotest_common.sh@960 -- # wait 68664 00:06:09.188 00:06:09.188 real 0m2.728s 00:06:09.188 user 0m3.175s 00:06:09.188 sys 0m0.864s 00:06:09.188 17:52:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.188 17:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:09.188 ************************************ 00:06:09.188 END TEST rpc 00:06:09.189 ************************************ 00:06:09.189 17:52:25 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:09.189 17:52:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:09.189 17:52:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.189 17:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:09.189 ************************************ 00:06:09.189 START TEST rpc_client 00:06:09.189 ************************************ 00:06:09.189 17:52:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:09.189 * Looking for test storage... 
00:06:09.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:09.189 17:52:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:09.189 17:52:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:09.189 17:52:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:09.189 17:52:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:09.189 17:52:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:09.189 17:52:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:09.189 17:52:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:09.189 17:52:26 -- scripts/common.sh@335 -- # IFS=.-: 00:06:09.189 17:52:26 -- scripts/common.sh@335 -- # read -ra ver1 00:06:09.189 17:52:26 -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.189 17:52:26 -- scripts/common.sh@336 -- # read -ra ver2 00:06:09.189 17:52:26 -- scripts/common.sh@337 -- # local 'op=<' 00:06:09.189 17:52:26 -- scripts/common.sh@339 -- # ver1_l=2 00:06:09.189 17:52:26 -- scripts/common.sh@340 -- # ver2_l=1 00:06:09.189 17:52:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:09.189 17:52:26 -- scripts/common.sh@343 -- # case "$op" in 00:06:09.189 17:52:26 -- scripts/common.sh@344 -- # : 1 00:06:09.189 17:52:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:09.189 17:52:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.189 17:52:26 -- scripts/common.sh@364 -- # decimal 1 00:06:09.189 17:52:26 -- scripts/common.sh@352 -- # local d=1 00:06:09.189 17:52:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.189 17:52:26 -- scripts/common.sh@354 -- # echo 1 00:06:09.189 17:52:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:09.189 17:52:26 -- scripts/common.sh@365 -- # decimal 2 00:06:09.189 17:52:26 -- scripts/common.sh@352 -- # local d=2 00:06:09.189 17:52:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.189 17:52:26 -- scripts/common.sh@354 -- # echo 2 00:06:09.189 17:52:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:09.189 17:52:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:09.189 17:52:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:09.189 17:52:26 -- scripts/common.sh@367 -- # return 0 00:06:09.189 17:52:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.189 17:52:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:09.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.189 --rc genhtml_branch_coverage=1 00:06:09.189 --rc genhtml_function_coverage=1 00:06:09.189 --rc genhtml_legend=1 00:06:09.189 --rc geninfo_all_blocks=1 00:06:09.189 --rc geninfo_unexecuted_blocks=1 00:06:09.189 00:06:09.189 ' 00:06:09.189 17:52:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:09.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.189 --rc genhtml_branch_coverage=1 00:06:09.189 --rc genhtml_function_coverage=1 00:06:09.189 --rc genhtml_legend=1 00:06:09.189 --rc geninfo_all_blocks=1 00:06:09.189 --rc geninfo_unexecuted_blocks=1 00:06:09.189 00:06:09.189 ' 00:06:09.189 17:52:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:09.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.189 --rc genhtml_branch_coverage=1 00:06:09.189 --rc genhtml_function_coverage=1 00:06:09.189 --rc genhtml_legend=1 00:06:09.189 --rc geninfo_all_blocks=1 00:06:09.189 --rc geninfo_unexecuted_blocks=1 00:06:09.189 00:06:09.189 ' 00:06:09.189 
17:52:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:09.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.189 --rc genhtml_branch_coverage=1 00:06:09.189 --rc genhtml_function_coverage=1 00:06:09.189 --rc genhtml_legend=1 00:06:09.189 --rc geninfo_all_blocks=1 00:06:09.189 --rc geninfo_unexecuted_blocks=1 00:06:09.189 00:06:09.189 ' 00:06:09.189 17:52:26 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:09.449 OK 00:06:09.449 17:52:26 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:09.449 00:06:09.449 real 0m0.282s 00:06:09.449 user 0m0.150s 00:06:09.449 sys 0m0.152s 00:06:09.449 17:52:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.449 17:52:26 -- common/autotest_common.sh@10 -- # set +x 00:06:09.449 ************************************ 00:06:09.449 END TEST rpc_client 00:06:09.449 ************************************ 00:06:09.449 17:52:26 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:09.449 17:52:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:09.449 17:52:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.449 17:52:26 -- common/autotest_common.sh@10 -- # set +x 00:06:09.449 ************************************ 00:06:09.449 START TEST json_config 00:06:09.449 ************************************ 00:06:09.449 17:52:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:09.449 17:52:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:09.449 17:52:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:09.449 17:52:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:09.709 17:52:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:09.709 17:52:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:09.709 17:52:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:09.709 17:52:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:09.709 17:52:26 -- scripts/common.sh@335 -- # IFS=.-: 00:06:09.709 17:52:26 -- scripts/common.sh@335 -- # read -ra ver1 00:06:09.709 17:52:26 -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.709 17:52:26 -- scripts/common.sh@336 -- # read -ra ver2 00:06:09.709 17:52:26 -- scripts/common.sh@337 -- # local 'op=<' 00:06:09.709 17:52:26 -- scripts/common.sh@339 -- # ver1_l=2 00:06:09.709 17:52:26 -- scripts/common.sh@340 -- # ver2_l=1 00:06:09.709 17:52:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:09.709 17:52:26 -- scripts/common.sh@343 -- # case "$op" in 00:06:09.709 17:52:26 -- scripts/common.sh@344 -- # : 1 00:06:09.709 17:52:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:09.709 17:52:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.709 17:52:26 -- scripts/common.sh@364 -- # decimal 1 00:06:09.709 17:52:26 -- scripts/common.sh@352 -- # local d=1 00:06:09.709 17:52:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.709 17:52:26 -- scripts/common.sh@354 -- # echo 1 00:06:09.709 17:52:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:09.709 17:52:26 -- scripts/common.sh@365 -- # decimal 2 00:06:09.709 17:52:26 -- scripts/common.sh@352 -- # local d=2 00:06:09.709 17:52:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.709 17:52:26 -- scripts/common.sh@354 -- # echo 2 00:06:09.709 17:52:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:09.709 17:52:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:09.709 17:52:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:09.709 17:52:26 -- scripts/common.sh@367 -- # return 0 00:06:09.709 17:52:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.709 17:52:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:09.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.709 --rc genhtml_branch_coverage=1 00:06:09.709 --rc genhtml_function_coverage=1 00:06:09.709 --rc genhtml_legend=1 00:06:09.709 --rc geninfo_all_blocks=1 00:06:09.709 --rc geninfo_unexecuted_blocks=1 00:06:09.709 00:06:09.709 ' 00:06:09.709 17:52:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:09.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.709 --rc genhtml_branch_coverage=1 00:06:09.709 --rc genhtml_function_coverage=1 00:06:09.709 --rc genhtml_legend=1 00:06:09.709 --rc geninfo_all_blocks=1 00:06:09.709 --rc geninfo_unexecuted_blocks=1 00:06:09.709 00:06:09.709 ' 00:06:09.709 17:52:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:09.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.709 --rc genhtml_branch_coverage=1 00:06:09.709 --rc genhtml_function_coverage=1 00:06:09.709 --rc genhtml_legend=1 00:06:09.709 --rc geninfo_all_blocks=1 00:06:09.710 --rc geninfo_unexecuted_blocks=1 00:06:09.710 00:06:09.710 ' 00:06:09.710 17:52:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:09.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.710 --rc genhtml_branch_coverage=1 00:06:09.710 --rc genhtml_function_coverage=1 00:06:09.710 --rc genhtml_legend=1 00:06:09.710 --rc geninfo_all_blocks=1 00:06:09.710 --rc geninfo_unexecuted_blocks=1 00:06:09.710 00:06:09.710 ' 00:06:09.710 17:52:26 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:09.710 17:52:26 -- nvmf/common.sh@7 -- # uname -s 00:06:09.710 17:52:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:09.710 17:52:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:09.710 17:52:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:09.710 17:52:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:09.710 17:52:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:09.710 17:52:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:09.710 17:52:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:09.710 17:52:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:09.710 17:52:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:09.710 17:52:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:09.710 17:52:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94f28f6d-9fc6-42c2-a7f8-6374e828f088 
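Just above, nvmf/common.sh (sourced by json_config.sh) pins the host identity used for any NVMe-oF connection in the run; the log shows the resulting NQN here and the matching HOSTID just below. A sketch of how those two values relate — the uuidgen fallback is an assumption, not something the trace shows:

    # nvme gen-hostnqn emits "nqn.2014-08.org.nvmexpress:uuid:<uuid>";
    # the host ID is that trailing UUID. The fallback branch is hypothetical.
    NVME_HOSTNQN=$(nvme gen-hostnqn 2>/dev/null) ||
        NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"
    NVME_HOSTID=${NVME_HOSTNQN##*:}   # e.g. 94f28f6d-9fc6-42c2-a7f8-6374e828f088
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")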
00:06:09.710 17:52:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=94f28f6d-9fc6-42c2-a7f8-6374e828f088 00:06:09.710 17:52:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:09.710 17:52:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:09.710 17:52:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:09.710 17:52:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:09.710 17:52:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.710 17:52:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.710 17:52:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.710 17:52:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.710 17:52:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.710 17:52:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.710 17:52:26 -- paths/export.sh@5 -- # export PATH 00:06:09.710 17:52:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.710 17:52:26 -- nvmf/common.sh@46 -- # : 0 00:06:09.710 17:52:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:09.710 17:52:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:09.710 17:52:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:09.710 17:52:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.710 17:52:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.710 17:52:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:09.710 17:52:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:09.710 17:52:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:09.710 17:52:26 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:06:09.710 17:52:26 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:06:09.710 17:52:26 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:06:09.710 17:52:26 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:09.710 WARNING: No tests are enabled so not running JSON configuration tests 00:06:09.710 17:52:26 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:09.710 17:52:26 -- json_config/json_config.sh@27 -- # exit 0 00:06:09.710 00:06:09.710 real 0m0.216s 00:06:09.710 user 0m0.126s 00:06:09.710 sys 0m0.099s 00:06:09.710 17:52:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.710 17:52:26 -- common/autotest_common.sh@10 -- # set +x 00:06:09.710 ************************************ 00:06:09.710 END TEST json_config 00:06:09.710 ************************************ 00:06:09.710 17:52:26 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:09.710 17:52:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:09.710 17:52:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.710 17:52:26 -- common/autotest_common.sh@10 -- # set +x 00:06:09.710 ************************************ 00:06:09.710 START TEST json_config_extra_key 00:06:09.710 ************************************ 00:06:09.710 17:52:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:09.710 17:52:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:09.710 17:52:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:09.710 17:52:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:09.970 17:52:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:09.970 17:52:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:09.970 17:52:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:09.970 17:52:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:09.970 17:52:26 -- scripts/common.sh@335 -- # IFS=.-: 00:06:09.970 17:52:26 -- scripts/common.sh@335 -- # read -ra ver1 00:06:09.970 17:52:26 -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.970 17:52:26 -- scripts/common.sh@336 -- # read -ra ver2 00:06:09.970 17:52:26 -- scripts/common.sh@337 -- # local 'op=<' 00:06:09.970 17:52:26 -- scripts/common.sh@339 -- # ver1_l=2 00:06:09.970 17:52:26 -- scripts/common.sh@340 -- # ver2_l=1 00:06:09.970 17:52:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:09.970 17:52:26 -- scripts/common.sh@343 -- # case "$op" in 00:06:09.970 17:52:26 -- scripts/common.sh@344 -- # : 1 00:06:09.970 17:52:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:09.970 17:52:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.970 17:52:26 -- scripts/common.sh@364 -- # decimal 1 00:06:09.970 17:52:26 -- scripts/common.sh@352 -- # local d=1 00:06:09.970 17:52:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.970 17:52:26 -- scripts/common.sh@354 -- # echo 1 00:06:09.970 17:52:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:09.970 17:52:26 -- scripts/common.sh@365 -- # decimal 2 00:06:09.970 17:52:26 -- scripts/common.sh@352 -- # local d=2 00:06:09.970 17:52:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.970 17:52:26 -- scripts/common.sh@354 -- # echo 2 00:06:09.970 17:52:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:09.970 17:52:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:09.970 17:52:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:09.970 17:52:26 -- scripts/common.sh@367 -- # return 0 00:06:09.970 17:52:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.970 17:52:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:09.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.970 --rc genhtml_branch_coverage=1 00:06:09.970 --rc genhtml_function_coverage=1 00:06:09.970 --rc genhtml_legend=1 00:06:09.970 --rc geninfo_all_blocks=1 00:06:09.970 --rc geninfo_unexecuted_blocks=1 00:06:09.970 00:06:09.970 ' 00:06:09.970 17:52:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:09.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.970 --rc genhtml_branch_coverage=1 00:06:09.970 --rc genhtml_function_coverage=1 00:06:09.970 --rc genhtml_legend=1 00:06:09.970 --rc geninfo_all_blocks=1 00:06:09.970 --rc geninfo_unexecuted_blocks=1 00:06:09.970 00:06:09.970 ' 00:06:09.970 17:52:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:09.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.970 --rc genhtml_branch_coverage=1 00:06:09.970 --rc genhtml_function_coverage=1 00:06:09.970 --rc genhtml_legend=1 00:06:09.970 --rc geninfo_all_blocks=1 00:06:09.970 --rc geninfo_unexecuted_blocks=1 00:06:09.970 00:06:09.970 ' 00:06:09.970 17:52:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:09.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.970 --rc genhtml_branch_coverage=1 00:06:09.970 --rc genhtml_function_coverage=1 00:06:09.970 --rc genhtml_legend=1 00:06:09.970 --rc geninfo_all_blocks=1 00:06:09.970 --rc geninfo_unexecuted_blocks=1 00:06:09.970 00:06:09.970 ' 00:06:09.970 17:52:26 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:09.970 17:52:26 -- nvmf/common.sh@7 -- # uname -s 00:06:09.970 17:52:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:09.970 17:52:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:09.970 17:52:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:09.970 17:52:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:09.970 17:52:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:09.970 17:52:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:09.970 17:52:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:09.970 17:52:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:09.970 17:52:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:09.970 17:52:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:09.970 17:52:26 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94f28f6d-9fc6-42c2-a7f8-6374e828f088 00:06:09.970 17:52:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=94f28f6d-9fc6-42c2-a7f8-6374e828f088 00:06:09.970 17:52:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:09.970 17:52:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:09.970 17:52:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:09.970 17:52:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:09.970 17:52:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.970 17:52:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.970 17:52:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.970 17:52:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.970 17:52:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.970 17:52:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.970 17:52:26 -- paths/export.sh@5 -- # export PATH 00:06:09.970 17:52:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.970 17:52:26 -- nvmf/common.sh@46 -- # : 0 00:06:09.970 17:52:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:09.970 17:52:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:09.970 17:52:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:09.970 17:52:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.970 17:52:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.970 17:52:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:09.970 17:52:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:09.970 17:52:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:09.970 17:52:26 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:06:09.970 17:52:26 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:06:09.970 17:52:26 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:09.970 17:52:26 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:06:09.970 17:52:26 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:09.970 17:52:26 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:06:09.971 17:52:26 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:09.971 17:52:26 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:06:09.971 17:52:26 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:09.971 INFO: launching applications... 00:06:09.971 17:52:26 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:06:09.971 17:52:26 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:09.971 17:52:26 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:06:09.971 17:52:26 -- json_config/json_config_extra_key.sh@25 -- # shift 00:06:09.971 17:52:26 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:06:09.971 17:52:26 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:06:09.971 17:52:26 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68965 00:06:09.971 17:52:26 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:06:09.971 Waiting for target to run... 00:06:09.971 17:52:26 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68965 /var/tmp/spdk_tgt.sock 00:06:09.971 17:52:26 -- common/autotest_common.sh@829 -- # '[' -z 68965 ']' 00:06:09.971 17:52:26 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:09.971 17:52:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:09.971 17:52:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:09.971 17:52:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:09.971 17:52:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.971 17:52:26 -- common/autotest_common.sh@10 -- # set +x 00:06:09.971 [2024-11-26 17:52:26.820192] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
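json_config_extra_key.sh tracks its targets through the app_pid/app_socket/app_params maps declared above, then starts spdk_tgt against extra_key.json and blocks in waitforlisten until the RPC socket answers. The command line and socket path below come straight from the log; the polling body is a guess at what waitforlisten does, not the autotest implementation:

    declare -A app_pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid[target]=$!

    # Block until the target either dies or answers RPCs on its socket.
    for (( i = 0; i < 100; i++ )); do
        kill -0 "${app_pid[target]}" 2>/dev/null || exit 1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
            rpc_get_methods &>/dev/null && break
        sleep 0.1
    done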
00:06:09.971 [2024-11-26 17:52:26.820339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68965 ] 00:06:10.539 [2024-11-26 17:52:27.188995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.539 [2024-11-26 17:52:27.214237] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:10.539 [2024-11-26 17:52:27.214419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.798 17:52:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.798 17:52:27 -- common/autotest_common.sh@862 -- # return 0 00:06:10.798 00:06:10.798 17:52:27 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:06:10.798 INFO: shutting down applications... 00:06:10.798 17:52:27 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:06:10.798 17:52:27 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:06:10.798 17:52:27 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:06:10.798 17:52:27 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:06:10.798 17:52:27 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68965 ]] 00:06:10.798 17:52:27 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68965 00:06:10.798 17:52:27 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:06:10.798 17:52:27 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:10.798 17:52:27 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68965 00:06:10.798 17:52:27 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:11.366 17:52:28 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:11.366 17:52:28 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:11.366 17:52:28 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68965 00:06:11.366 17:52:28 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:06:11.366 17:52:28 -- json_config/json_config_extra_key.sh@52 -- # break 00:06:11.366 17:52:28 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:06:11.366 SPDK target shutdown done 00:06:11.366 17:52:28 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:06:11.366 17:52:28 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:06:11.366 Success 00:06:11.366 00:06:11.366 real 0m1.618s 00:06:11.366 user 0m1.306s 00:06:11.366 sys 0m0.498s 00:06:11.366 17:52:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.366 17:52:28 -- common/autotest_common.sh@10 -- # set +x 00:06:11.366 ************************************ 00:06:11.366 END TEST json_config_extra_key 00:06:11.366 ************************************ 00:06:11.366 17:52:28 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:11.366 17:52:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.366 17:52:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.366 17:52:28 -- common/autotest_common.sh@10 -- # set +x 00:06:11.366 ************************************ 00:06:11.366 START TEST alias_rpc 00:06:11.366 ************************************ 00:06:11.366 17:52:28 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:11.626 * Looking for test storage... 00:06:11.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:11.626 17:52:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:11.626 17:52:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:11.626 17:52:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:11.626 17:52:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:11.626 17:52:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:11.626 17:52:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:11.626 17:52:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:11.626 17:52:28 -- scripts/common.sh@335 -- # IFS=.-: 00:06:11.626 17:52:28 -- scripts/common.sh@335 -- # read -ra ver1 00:06:11.626 17:52:28 -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.626 17:52:28 -- scripts/common.sh@336 -- # read -ra ver2 00:06:11.626 17:52:28 -- scripts/common.sh@337 -- # local 'op=<' 00:06:11.626 17:52:28 -- scripts/common.sh@339 -- # ver1_l=2 00:06:11.626 17:52:28 -- scripts/common.sh@340 -- # ver2_l=1 00:06:11.626 17:52:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:11.626 17:52:28 -- scripts/common.sh@343 -- # case "$op" in 00:06:11.626 17:52:28 -- scripts/common.sh@344 -- # : 1 00:06:11.626 17:52:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:11.626 17:52:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.626 17:52:28 -- scripts/common.sh@364 -- # decimal 1 00:06:11.626 17:52:28 -- scripts/common.sh@352 -- # local d=1 00:06:11.626 17:52:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.626 17:52:28 -- scripts/common.sh@354 -- # echo 1 00:06:11.626 17:52:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:11.626 17:52:28 -- scripts/common.sh@365 -- # decimal 2 00:06:11.626 17:52:28 -- scripts/common.sh@352 -- # local d=2 00:06:11.626 17:52:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.626 17:52:28 -- scripts/common.sh@354 -- # echo 2 00:06:11.626 17:52:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:11.626 17:52:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:11.626 17:52:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:11.626 17:52:28 -- scripts/common.sh@367 -- # return 0 00:06:11.626 17:52:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.626 17:52:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:11.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.626 --rc genhtml_branch_coverage=1 00:06:11.626 --rc genhtml_function_coverage=1 00:06:11.626 --rc genhtml_legend=1 00:06:11.626 --rc geninfo_all_blocks=1 00:06:11.626 --rc geninfo_unexecuted_blocks=1 00:06:11.626 00:06:11.626 ' 00:06:11.626 17:52:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:11.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.626 --rc genhtml_branch_coverage=1 00:06:11.626 --rc genhtml_function_coverage=1 00:06:11.626 --rc genhtml_legend=1 00:06:11.626 --rc geninfo_all_blocks=1 00:06:11.626 --rc geninfo_unexecuted_blocks=1 00:06:11.626 00:06:11.626 ' 00:06:11.626 17:52:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:11.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.626 --rc genhtml_branch_coverage=1 00:06:11.626 --rc genhtml_function_coverage=1 00:06:11.626 --rc genhtml_legend=1 
00:06:11.626 --rc geninfo_all_blocks=1 00:06:11.626 --rc geninfo_unexecuted_blocks=1 00:06:11.626 00:06:11.626 ' 00:06:11.626 17:52:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:11.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.626 --rc genhtml_branch_coverage=1 00:06:11.626 --rc genhtml_function_coverage=1 00:06:11.626 --rc genhtml_legend=1 00:06:11.626 --rc geninfo_all_blocks=1 00:06:11.626 --rc geninfo_unexecuted_blocks=1 00:06:11.626 00:06:11.626 ' 00:06:11.626 17:52:28 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:11.626 17:52:28 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69032 00:06:11.626 17:52:28 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:11.626 17:52:28 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69032 00:06:11.626 17:52:28 -- common/autotest_common.sh@829 -- # '[' -z 69032 ']' 00:06:11.626 17:52:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.626 17:52:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.626 17:52:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.626 17:52:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.626 17:52:28 -- common/autotest_common.sh@10 -- # set +x 00:06:11.626 [2024-11-26 17:52:28.511630] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:11.626 [2024-11-26 17:52:28.511764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69032 ] 00:06:11.885 [2024-11-26 17:52:28.665851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.885 [2024-11-26 17:52:28.708372] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:11.885 [2024-11-26 17:52:28.708600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.455 17:52:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.455 17:52:29 -- common/autotest_common.sh@862 -- # return 0 00:06:12.455 17:52:29 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:12.717 17:52:29 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69032 00:06:12.717 17:52:29 -- common/autotest_common.sh@936 -- # '[' -z 69032 ']' 00:06:12.717 17:52:29 -- common/autotest_common.sh@940 -- # kill -0 69032 00:06:12.717 17:52:29 -- common/autotest_common.sh@941 -- # uname 00:06:12.717 17:52:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:12.717 17:52:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69032 00:06:12.717 17:52:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:12.717 17:52:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:12.717 killing process with pid 69032 00:06:12.717 17:52:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69032' 00:06:12.717 17:52:29 -- common/autotest_common.sh@955 -- # kill 69032 00:06:12.717 17:52:29 -- common/autotest_common.sh@960 -- # wait 69032 00:06:13.286 00:06:13.286 real 0m1.760s 00:06:13.286 user 0m1.757s 00:06:13.286 sys 0m0.526s 00:06:13.286 17:52:29 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.286 17:52:29 -- common/autotest_common.sh@10 -- # set +x 00:06:13.286 ************************************ 00:06:13.286 END TEST alias_rpc 00:06:13.286 ************************************ 00:06:13.286 17:52:30 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:06:13.286 17:52:30 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:13.286 17:52:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:13.286 17:52:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.286 17:52:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.286 ************************************ 00:06:13.286 START TEST spdkcli_tcp 00:06:13.286 ************************************ 00:06:13.286 17:52:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:13.286 * Looking for test storage... 00:06:13.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:13.286 17:52:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:13.286 17:52:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:13.286 17:52:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:13.286 17:52:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:13.286 17:52:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:13.286 17:52:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:13.286 17:52:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:13.286 17:52:30 -- scripts/common.sh@335 -- # IFS=.-: 00:06:13.286 17:52:30 -- scripts/common.sh@335 -- # read -ra ver1 00:06:13.286 17:52:30 -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.286 17:52:30 -- scripts/common.sh@336 -- # read -ra ver2 00:06:13.286 17:52:30 -- scripts/common.sh@337 -- # local 'op=<' 00:06:13.286 17:52:30 -- scripts/common.sh@339 -- # ver1_l=2 00:06:13.286 17:52:30 -- scripts/common.sh@340 -- # ver2_l=1 00:06:13.286 17:52:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:13.286 17:52:30 -- scripts/common.sh@343 -- # case "$op" in 00:06:13.286 17:52:30 -- scripts/common.sh@344 -- # : 1 00:06:13.286 17:52:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:13.286 17:52:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.545 17:52:30 -- scripts/common.sh@364 -- # decimal 1 00:06:13.545 17:52:30 -- scripts/common.sh@352 -- # local d=1 00:06:13.545 17:52:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.545 17:52:30 -- scripts/common.sh@354 -- # echo 1 00:06:13.545 17:52:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:13.545 17:52:30 -- scripts/common.sh@365 -- # decimal 2 00:06:13.545 17:52:30 -- scripts/common.sh@352 -- # local d=2 00:06:13.545 17:52:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.545 17:52:30 -- scripts/common.sh@354 -- # echo 2 00:06:13.545 17:52:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:13.545 17:52:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:13.545 17:52:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:13.545 17:52:30 -- scripts/common.sh@367 -- # return 0 00:06:13.545 17:52:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.545 17:52:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:13.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.545 --rc genhtml_branch_coverage=1 00:06:13.545 --rc genhtml_function_coverage=1 00:06:13.545 --rc genhtml_legend=1 00:06:13.545 --rc geninfo_all_blocks=1 00:06:13.545 --rc geninfo_unexecuted_blocks=1 00:06:13.545 00:06:13.545 ' 00:06:13.545 17:52:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:13.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.545 --rc genhtml_branch_coverage=1 00:06:13.545 --rc genhtml_function_coverage=1 00:06:13.545 --rc genhtml_legend=1 00:06:13.545 --rc geninfo_all_blocks=1 00:06:13.545 --rc geninfo_unexecuted_blocks=1 00:06:13.545 00:06:13.545 ' 00:06:13.545 17:52:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:13.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.545 --rc genhtml_branch_coverage=1 00:06:13.545 --rc genhtml_function_coverage=1 00:06:13.545 --rc genhtml_legend=1 00:06:13.545 --rc geninfo_all_blocks=1 00:06:13.545 --rc geninfo_unexecuted_blocks=1 00:06:13.545 00:06:13.545 ' 00:06:13.545 17:52:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:13.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.545 --rc genhtml_branch_coverage=1 00:06:13.545 --rc genhtml_function_coverage=1 00:06:13.545 --rc genhtml_legend=1 00:06:13.545 --rc geninfo_all_blocks=1 00:06:13.545 --rc geninfo_unexecuted_blocks=1 00:06:13.545 00:06:13.545 ' 00:06:13.545 17:52:30 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:13.545 17:52:30 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:13.545 17:52:30 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:13.545 17:52:30 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:13.545 17:52:30 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:13.545 17:52:30 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:13.545 17:52:30 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:13.545 17:52:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:13.545 17:52:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.545 17:52:30 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69116 00:06:13.545 17:52:30 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:13.545 17:52:30 -- 
spdkcli/tcp.sh@27 -- # waitforlisten 69116 00:06:13.545 17:52:30 -- common/autotest_common.sh@829 -- # '[' -z 69116 ']' 00:06:13.545 17:52:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.545 17:52:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.545 17:52:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.545 17:52:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.545 17:52:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.545 [2024-11-26 17:52:30.331760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:13.545 [2024-11-26 17:52:30.331883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69116 ] 00:06:13.804 [2024-11-26 17:52:30.483878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.804 [2024-11-26 17:52:30.531118] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:13.805 [2024-11-26 17:52:30.531737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.805 [2024-11-26 17:52:30.531847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.372 17:52:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.372 17:52:31 -- common/autotest_common.sh@862 -- # return 0 00:06:14.372 17:52:31 -- spdkcli/tcp.sh@31 -- # socat_pid=69133 00:06:14.372 17:52:31 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:14.372 17:52:31 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:14.631 [ 00:06:14.631 "bdev_malloc_delete", 00:06:14.631 "bdev_malloc_create", 00:06:14.631 "bdev_null_resize", 00:06:14.631 "bdev_null_delete", 00:06:14.631 "bdev_null_create", 00:06:14.631 "bdev_nvme_cuse_unregister", 00:06:14.631 "bdev_nvme_cuse_register", 00:06:14.631 "bdev_opal_new_user", 00:06:14.631 "bdev_opal_set_lock_state", 00:06:14.631 "bdev_opal_delete", 00:06:14.631 "bdev_opal_get_info", 00:06:14.631 "bdev_opal_create", 00:06:14.631 "bdev_nvme_opal_revert", 00:06:14.631 "bdev_nvme_opal_init", 00:06:14.631 "bdev_nvme_send_cmd", 00:06:14.631 "bdev_nvme_get_path_iostat", 00:06:14.631 "bdev_nvme_get_mdns_discovery_info", 00:06:14.631 "bdev_nvme_stop_mdns_discovery", 00:06:14.631 "bdev_nvme_start_mdns_discovery", 00:06:14.631 "bdev_nvme_set_multipath_policy", 00:06:14.631 "bdev_nvme_set_preferred_path", 00:06:14.631 "bdev_nvme_get_io_paths", 00:06:14.631 "bdev_nvme_remove_error_injection", 00:06:14.631 "bdev_nvme_add_error_injection", 00:06:14.631 "bdev_nvme_get_discovery_info", 00:06:14.631 "bdev_nvme_stop_discovery", 00:06:14.631 "bdev_nvme_start_discovery", 00:06:14.631 "bdev_nvme_get_controller_health_info", 00:06:14.631 "bdev_nvme_disable_controller", 00:06:14.631 "bdev_nvme_enable_controller", 00:06:14.631 "bdev_nvme_reset_controller", 00:06:14.631 "bdev_nvme_get_transport_statistics", 00:06:14.631 "bdev_nvme_apply_firmware", 00:06:14.631 "bdev_nvme_detach_controller", 00:06:14.631 "bdev_nvme_get_controllers", 00:06:14.631 "bdev_nvme_attach_controller", 00:06:14.631 "bdev_nvme_set_hotplug", 00:06:14.631 
"bdev_nvme_set_options", 00:06:14.631 "bdev_passthru_delete", 00:06:14.631 "bdev_passthru_create", 00:06:14.631 "bdev_lvol_grow_lvstore", 00:06:14.631 "bdev_lvol_get_lvols", 00:06:14.631 "bdev_lvol_get_lvstores", 00:06:14.631 "bdev_lvol_delete", 00:06:14.631 "bdev_lvol_set_read_only", 00:06:14.631 "bdev_lvol_resize", 00:06:14.631 "bdev_lvol_decouple_parent", 00:06:14.631 "bdev_lvol_inflate", 00:06:14.631 "bdev_lvol_rename", 00:06:14.631 "bdev_lvol_clone_bdev", 00:06:14.631 "bdev_lvol_clone", 00:06:14.631 "bdev_lvol_snapshot", 00:06:14.631 "bdev_lvol_create", 00:06:14.631 "bdev_lvol_delete_lvstore", 00:06:14.631 "bdev_lvol_rename_lvstore", 00:06:14.631 "bdev_lvol_create_lvstore", 00:06:14.631 "bdev_raid_set_options", 00:06:14.631 "bdev_raid_remove_base_bdev", 00:06:14.631 "bdev_raid_add_base_bdev", 00:06:14.631 "bdev_raid_delete", 00:06:14.631 "bdev_raid_create", 00:06:14.631 "bdev_raid_get_bdevs", 00:06:14.631 "bdev_error_inject_error", 00:06:14.631 "bdev_error_delete", 00:06:14.631 "bdev_error_create", 00:06:14.631 "bdev_split_delete", 00:06:14.631 "bdev_split_create", 00:06:14.631 "bdev_delay_delete", 00:06:14.631 "bdev_delay_create", 00:06:14.631 "bdev_delay_update_latency", 00:06:14.631 "bdev_zone_block_delete", 00:06:14.631 "bdev_zone_block_create", 00:06:14.631 "blobfs_create", 00:06:14.631 "blobfs_detect", 00:06:14.631 "blobfs_set_cache_size", 00:06:14.631 "bdev_xnvme_delete", 00:06:14.631 "bdev_xnvme_create", 00:06:14.631 "bdev_aio_delete", 00:06:14.631 "bdev_aio_rescan", 00:06:14.631 "bdev_aio_create", 00:06:14.631 "bdev_ftl_set_property", 00:06:14.631 "bdev_ftl_get_properties", 00:06:14.631 "bdev_ftl_get_stats", 00:06:14.631 "bdev_ftl_unmap", 00:06:14.631 "bdev_ftl_unload", 00:06:14.631 "bdev_ftl_delete", 00:06:14.631 "bdev_ftl_load", 00:06:14.631 "bdev_ftl_create", 00:06:14.632 "bdev_virtio_attach_controller", 00:06:14.632 "bdev_virtio_scsi_get_devices", 00:06:14.632 "bdev_virtio_detach_controller", 00:06:14.632 "bdev_virtio_blk_set_hotplug", 00:06:14.632 "bdev_iscsi_delete", 00:06:14.632 "bdev_iscsi_create", 00:06:14.632 "bdev_iscsi_set_options", 00:06:14.632 "accel_error_inject_error", 00:06:14.632 "ioat_scan_accel_module", 00:06:14.632 "dsa_scan_accel_module", 00:06:14.632 "iaa_scan_accel_module", 00:06:14.632 "iscsi_set_options", 00:06:14.632 "iscsi_get_auth_groups", 00:06:14.632 "iscsi_auth_group_remove_secret", 00:06:14.632 "iscsi_auth_group_add_secret", 00:06:14.632 "iscsi_delete_auth_group", 00:06:14.632 "iscsi_create_auth_group", 00:06:14.632 "iscsi_set_discovery_auth", 00:06:14.632 "iscsi_get_options", 00:06:14.632 "iscsi_target_node_request_logout", 00:06:14.632 "iscsi_target_node_set_redirect", 00:06:14.632 "iscsi_target_node_set_auth", 00:06:14.632 "iscsi_target_node_add_lun", 00:06:14.632 "iscsi_get_connections", 00:06:14.632 "iscsi_portal_group_set_auth", 00:06:14.632 "iscsi_start_portal_group", 00:06:14.632 "iscsi_delete_portal_group", 00:06:14.632 "iscsi_create_portal_group", 00:06:14.632 "iscsi_get_portal_groups", 00:06:14.632 "iscsi_delete_target_node", 00:06:14.632 "iscsi_target_node_remove_pg_ig_maps", 00:06:14.632 "iscsi_target_node_add_pg_ig_maps", 00:06:14.632 "iscsi_create_target_node", 00:06:14.632 "iscsi_get_target_nodes", 00:06:14.632 "iscsi_delete_initiator_group", 00:06:14.632 "iscsi_initiator_group_remove_initiators", 00:06:14.632 "iscsi_initiator_group_add_initiators", 00:06:14.632 "iscsi_create_initiator_group", 00:06:14.632 "iscsi_get_initiator_groups", 00:06:14.632 "nvmf_set_crdt", 00:06:14.632 "nvmf_set_config", 00:06:14.632 
"nvmf_set_max_subsystems", 00:06:14.632 "nvmf_subsystem_get_listeners", 00:06:14.632 "nvmf_subsystem_get_qpairs", 00:06:14.632 "nvmf_subsystem_get_controllers", 00:06:14.632 "nvmf_get_stats", 00:06:14.632 "nvmf_get_transports", 00:06:14.632 "nvmf_create_transport", 00:06:14.632 "nvmf_get_targets", 00:06:14.632 "nvmf_delete_target", 00:06:14.632 "nvmf_create_target", 00:06:14.632 "nvmf_subsystem_allow_any_host", 00:06:14.632 "nvmf_subsystem_remove_host", 00:06:14.632 "nvmf_subsystem_add_host", 00:06:14.632 "nvmf_subsystem_remove_ns", 00:06:14.632 "nvmf_subsystem_add_ns", 00:06:14.632 "nvmf_subsystem_listener_set_ana_state", 00:06:14.632 "nvmf_discovery_get_referrals", 00:06:14.632 "nvmf_discovery_remove_referral", 00:06:14.632 "nvmf_discovery_add_referral", 00:06:14.632 "nvmf_subsystem_remove_listener", 00:06:14.632 "nvmf_subsystem_add_listener", 00:06:14.632 "nvmf_delete_subsystem", 00:06:14.632 "nvmf_create_subsystem", 00:06:14.632 "nvmf_get_subsystems", 00:06:14.632 "env_dpdk_get_mem_stats", 00:06:14.632 "nbd_get_disks", 00:06:14.632 "nbd_stop_disk", 00:06:14.632 "nbd_start_disk", 00:06:14.632 "ublk_recover_disk", 00:06:14.632 "ublk_get_disks", 00:06:14.632 "ublk_stop_disk", 00:06:14.632 "ublk_start_disk", 00:06:14.632 "ublk_destroy_target", 00:06:14.632 "ublk_create_target", 00:06:14.632 "virtio_blk_create_transport", 00:06:14.632 "virtio_blk_get_transports", 00:06:14.632 "vhost_controller_set_coalescing", 00:06:14.632 "vhost_get_controllers", 00:06:14.632 "vhost_delete_controller", 00:06:14.632 "vhost_create_blk_controller", 00:06:14.632 "vhost_scsi_controller_remove_target", 00:06:14.632 "vhost_scsi_controller_add_target", 00:06:14.632 "vhost_start_scsi_controller", 00:06:14.632 "vhost_create_scsi_controller", 00:06:14.632 "thread_set_cpumask", 00:06:14.632 "framework_get_scheduler", 00:06:14.632 "framework_set_scheduler", 00:06:14.632 "framework_get_reactors", 00:06:14.632 "thread_get_io_channels", 00:06:14.632 "thread_get_pollers", 00:06:14.632 "thread_get_stats", 00:06:14.632 "framework_monitor_context_switch", 00:06:14.632 "spdk_kill_instance", 00:06:14.632 "log_enable_timestamps", 00:06:14.632 "log_get_flags", 00:06:14.632 "log_clear_flag", 00:06:14.632 "log_set_flag", 00:06:14.632 "log_get_level", 00:06:14.632 "log_set_level", 00:06:14.632 "log_get_print_level", 00:06:14.632 "log_set_print_level", 00:06:14.632 "framework_enable_cpumask_locks", 00:06:14.632 "framework_disable_cpumask_locks", 00:06:14.632 "framework_wait_init", 00:06:14.632 "framework_start_init", 00:06:14.632 "scsi_get_devices", 00:06:14.632 "bdev_get_histogram", 00:06:14.632 "bdev_enable_histogram", 00:06:14.632 "bdev_set_qos_limit", 00:06:14.632 "bdev_set_qd_sampling_period", 00:06:14.632 "bdev_get_bdevs", 00:06:14.632 "bdev_reset_iostat", 00:06:14.632 "bdev_get_iostat", 00:06:14.632 "bdev_examine", 00:06:14.632 "bdev_wait_for_examine", 00:06:14.632 "bdev_set_options", 00:06:14.632 "notify_get_notifications", 00:06:14.632 "notify_get_types", 00:06:14.632 "accel_get_stats", 00:06:14.632 "accel_set_options", 00:06:14.632 "accel_set_driver", 00:06:14.632 "accel_crypto_key_destroy", 00:06:14.632 "accel_crypto_keys_get", 00:06:14.632 "accel_crypto_key_create", 00:06:14.632 "accel_assign_opc", 00:06:14.632 "accel_get_module_info", 00:06:14.632 "accel_get_opc_assignments", 00:06:14.632 "vmd_rescan", 00:06:14.632 "vmd_remove_device", 00:06:14.632 "vmd_enable", 00:06:14.632 "sock_set_default_impl", 00:06:14.632 "sock_impl_set_options", 00:06:14.632 "sock_impl_get_options", 00:06:14.632 "iobuf_get_stats", 00:06:14.632 
"iobuf_set_options", 00:06:14.632 "framework_get_pci_devices", 00:06:14.632 "framework_get_config", 00:06:14.632 "framework_get_subsystems", 00:06:14.632 "trace_get_info", 00:06:14.632 "trace_get_tpoint_group_mask", 00:06:14.632 "trace_disable_tpoint_group", 00:06:14.632 "trace_enable_tpoint_group", 00:06:14.632 "trace_clear_tpoint_mask", 00:06:14.632 "trace_set_tpoint_mask", 00:06:14.632 "spdk_get_version", 00:06:14.632 "rpc_get_methods" 00:06:14.632 ] 00:06:14.632 17:52:31 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:14.632 17:52:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:14.632 17:52:31 -- common/autotest_common.sh@10 -- # set +x 00:06:14.632 17:52:31 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:14.632 17:52:31 -- spdkcli/tcp.sh@38 -- # killprocess 69116 00:06:14.632 17:52:31 -- common/autotest_common.sh@936 -- # '[' -z 69116 ']' 00:06:14.632 17:52:31 -- common/autotest_common.sh@940 -- # kill -0 69116 00:06:14.632 17:52:31 -- common/autotest_common.sh@941 -- # uname 00:06:14.632 17:52:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:14.632 17:52:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69116 00:06:14.632 17:52:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:14.632 17:52:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:14.632 17:52:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69116' 00:06:14.632 killing process with pid 69116 00:06:14.632 17:52:31 -- common/autotest_common.sh@955 -- # kill 69116 00:06:14.632 17:52:31 -- common/autotest_common.sh@960 -- # wait 69116 00:06:15.200 00:06:15.200 real 0m1.797s 00:06:15.200 user 0m2.967s 00:06:15.200 sys 0m0.581s 00:06:15.200 17:52:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.200 17:52:31 -- common/autotest_common.sh@10 -- # set +x 00:06:15.200 ************************************ 00:06:15.200 END TEST spdkcli_tcp 00:06:15.200 ************************************ 00:06:15.200 17:52:31 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:15.200 17:52:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.200 17:52:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.201 17:52:31 -- common/autotest_common.sh@10 -- # set +x 00:06:15.201 ************************************ 00:06:15.201 START TEST dpdk_mem_utility 00:06:15.201 ************************************ 00:06:15.201 17:52:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:15.201 * Looking for test storage... 
00:06:15.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:15.201 17:52:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:15.201 17:52:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:15.201 17:52:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:15.201 17:52:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:15.201 17:52:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:15.201 17:52:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:15.201 17:52:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:15.201 17:52:32 -- scripts/common.sh@335 -- # IFS=.-: 00:06:15.201 17:52:32 -- scripts/common.sh@335 -- # read -ra ver1 00:06:15.201 17:52:32 -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.201 17:52:32 -- scripts/common.sh@336 -- # read -ra ver2 00:06:15.201 17:52:32 -- scripts/common.sh@337 -- # local 'op=<' 00:06:15.201 17:52:32 -- scripts/common.sh@339 -- # ver1_l=2 00:06:15.201 17:52:32 -- scripts/common.sh@340 -- # ver2_l=1 00:06:15.201 17:52:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:15.201 17:52:32 -- scripts/common.sh@343 -- # case "$op" in 00:06:15.201 17:52:32 -- scripts/common.sh@344 -- # : 1 00:06:15.201 17:52:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:15.201 17:52:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.201 17:52:32 -- scripts/common.sh@364 -- # decimal 1 00:06:15.201 17:52:32 -- scripts/common.sh@352 -- # local d=1 00:06:15.201 17:52:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.201 17:52:32 -- scripts/common.sh@354 -- # echo 1 00:06:15.201 17:52:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:15.201 17:52:32 -- scripts/common.sh@365 -- # decimal 2 00:06:15.201 17:52:32 -- scripts/common.sh@352 -- # local d=2 00:06:15.201 17:52:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.201 17:52:32 -- scripts/common.sh@354 -- # echo 2 00:06:15.201 17:52:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:15.201 17:52:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:15.201 17:52:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:15.201 17:52:32 -- scripts/common.sh@367 -- # return 0 00:06:15.201 17:52:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.201 17:52:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:15.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.201 --rc genhtml_branch_coverage=1 00:06:15.201 --rc genhtml_function_coverage=1 00:06:15.201 --rc genhtml_legend=1 00:06:15.201 --rc geninfo_all_blocks=1 00:06:15.201 --rc geninfo_unexecuted_blocks=1 00:06:15.201 00:06:15.201 ' 00:06:15.201 17:52:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:15.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.201 --rc genhtml_branch_coverage=1 00:06:15.201 --rc genhtml_function_coverage=1 00:06:15.201 --rc genhtml_legend=1 00:06:15.201 --rc geninfo_all_blocks=1 00:06:15.201 --rc geninfo_unexecuted_blocks=1 00:06:15.201 00:06:15.201 ' 00:06:15.201 17:52:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:15.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.201 --rc genhtml_branch_coverage=1 00:06:15.201 --rc genhtml_function_coverage=1 00:06:15.201 --rc genhtml_legend=1 00:06:15.201 --rc geninfo_all_blocks=1 00:06:15.201 --rc geninfo_unexecuted_blocks=1 00:06:15.201 00:06:15.201 ' 
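The spdkcli_tcp suite that just finished exercises the RPC layer over TCP without a TCP listener in spdk_tgt itself: a socat process bridges TCP port 9998 to the target's Unix socket, and rpc.py dials 127.0.0.1:9998 with retries. The moving parts, lifted from the traces above (the explicit kill stands in for the script's ERR/EXIT trap):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # -r 100 retries the connect up to 100 times; -t 2 caps each try at 2 s.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
        -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"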
00:06:15.201 17:52:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:15.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.201 --rc genhtml_branch_coverage=1 00:06:15.201 --rc genhtml_function_coverage=1 00:06:15.201 --rc genhtml_legend=1 00:06:15.201 --rc geninfo_all_blocks=1 00:06:15.201 --rc geninfo_unexecuted_blocks=1 00:06:15.201 00:06:15.201 ' 00:06:15.201 17:52:32 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:15.201 17:52:32 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69215 00:06:15.201 17:52:32 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.201 17:52:32 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69215 00:06:15.201 17:52:32 -- common/autotest_common.sh@829 -- # '[' -z 69215 ']' 00:06:15.201 17:52:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.201 17:52:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.201 17:52:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.201 17:52:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.201 17:52:32 -- common/autotest_common.sh@10 -- # set +x 00:06:15.460 [2024-11-26 17:52:32.206245] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:15.460 [2024-11-26 17:52:32.206809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69215 ] 00:06:15.460 [2024-11-26 17:52:32.357753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.718 [2024-11-26 17:52:32.400251] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:15.718 [2024-11-26 17:52:32.400648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.291 17:52:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.291 17:52:33 -- common/autotest_common.sh@862 -- # return 0 00:06:16.291 17:52:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:16.291 17:52:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:16.291 17:52:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.291 17:52:33 -- common/autotest_common.sh@10 -- # set +x 00:06:16.291 { 00:06:16.291 "filename": "/tmp/spdk_mem_dump.txt" 00:06:16.291 } 00:06:16.291 17:52:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.292 17:52:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:16.292 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:16.292 1 heaps totaling size 814.000000 MiB 00:06:16.292 size: 814.000000 MiB heap id: 0 00:06:16.292 end heaps---------- 00:06:16.292 8 mempools totaling size 598.116089 MiB 00:06:16.292 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:16.292 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:16.292 size: 84.521057 MiB name: bdev_io_69215 00:06:16.292 size: 51.011292 MiB name: evtpool_69215 00:06:16.292 size: 50.003479 MiB name: msgpool_69215 
00:06:16.292 size: 21.763794 MiB name: PDU_Pool
00:06:16.292 size: 19.513306 MiB name: SCSI_TASK_Pool
00:06:16.292 size: 0.026123 MiB name: Session_Pool
00:06:16.292 end mempools-------
00:06:16.292 6 memzones totaling size 4.142822 MiB
00:06:16.292 size: 1.000366 MiB name: RG_ring_0_69215
00:06:16.292 size: 1.000366 MiB name: RG_ring_1_69215
00:06:16.292 size: 1.000366 MiB name: RG_ring_4_69215
00:06:16.292 size: 1.000366 MiB name: RG_ring_5_69215
00:06:16.292 size: 0.125366 MiB name: RG_ring_2_69215
00:06:16.292 size: 0.015991 MiB name: RG_ring_3_69215
00:06:16.292 end memzones-------
00:06:16.292 17:52:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:06:16.292 heap id: 0 total size: 814.000000 MiB number of busy elements: 310 number of free elements: 15
00:06:16.292 list of free elements. size: 12.470093 MiB
00:06:16.292 element at address: 0x200000400000 with size: 1.999512 MiB
00:06:16.292 element at address: 0x200018e00000 with size: 0.999878 MiB
00:06:16.292 element at address: 0x200019000000 with size: 0.999878 MiB
00:06:16.292 element at address: 0x200003e00000 with size: 0.996277 MiB
00:06:16.292 element at address: 0x200031c00000 with size: 0.994446 MiB
00:06:16.292 element at address: 0x200013800000 with size: 0.978699 MiB
00:06:16.292 element at address: 0x200007000000 with size: 0.959839 MiB
00:06:16.292 element at address: 0x200019200000 with size: 0.936584 MiB
00:06:16.292 element at address: 0x200000200000 with size: 0.832825 MiB
00:06:16.292 element at address: 0x20001aa00000 with size: 0.567871 MiB
00:06:16.292 element at address: 0x20000b200000 with size: 0.488892 MiB
00:06:16.292 element at address: 0x200000800000 with size: 0.486145 MiB
00:06:16.292 element at address: 0x200019400000 with size: 0.485657 MiB
00:06:16.292 element at address: 0x200027e00000 with size: 0.395752 MiB
00:06:16.292 element at address: 0x200003a00000 with size: 0.347839 MiB
00:06:16.292 list of standard malloc elements.
size: 199.267334 MiB 00:06:16.292 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:16.292 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:16.292 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:16.292 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:16.292 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:16.292 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:16.292 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:16.292 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:16.292 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:16.292 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d71c0 with size: 0.000183 MiB 
00:06:16.292 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:16.292 element at address: 0x20000087c740 with size: 0.000183 MiB 00:06:16.292 element at address: 0x20000087c800 with size: 0.000183 MiB 00:06:16.292 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:16.292 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:16.292 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:16.292 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:16.292 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:16.292 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:16.292 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a59180 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a59240 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a59300 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a59480 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a59540 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a59600 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a59780 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a59840 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a59900 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:16.292 element at 
address: 0x200003a5a140 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:16.292 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:16.293 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:16.293 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:16.293 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:16.293 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:16.293 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:16.293 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:16.293 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:16.293 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:16.293 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:16.293 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:16.293 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa91600 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa91900 
with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 
00:06:16.293 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:16.293 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:16.293 element at address: 0x200027e65500 with size: 0.000183 MiB 00:06:16.293 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:06:16.293 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:06:16.293 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:06:16.293 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:16.294 element at 
address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6f480 
with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:16.294 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:16.294 list of memzone associated elements. size: 602.262573 MiB 00:06:16.294 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:16.294 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:16.294 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:16.294 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:16.294 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:16.294 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_69215_0 00:06:16.294 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:16.294 associated memzone info: size: 48.002930 MiB name: MP_evtpool_69215_0 00:06:16.294 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:16.294 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69215_0 00:06:16.294 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:16.294 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:16.294 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:16.294 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:16.294 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:16.294 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_69215 00:06:16.294 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:16.294 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_69215 00:06:16.294 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:16.294 associated memzone info: size: 1.007996 MiB name: MP_evtpool_69215 00:06:16.294 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:16.294 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:16.294 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:16.294 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:16.294 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:16.294 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:16.294 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:16.294 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:16.294 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:16.294 associated memzone info: size: 1.000366 MiB name: RG_ring_0_69215 00:06:16.294 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:16.294 associated memzone info: 
size: 1.000366 MiB name: RG_ring_1_69215 00:06:16.294 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:16.294 associated memzone info: size: 1.000366 MiB name: RG_ring_4_69215 00:06:16.294 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:16.294 associated memzone info: size: 1.000366 MiB name: RG_ring_5_69215 00:06:16.294 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:16.294 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_69215 00:06:16.294 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:16.294 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:16.294 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:16.294 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:16.294 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:16.294 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:16.294 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:16.294 associated memzone info: size: 0.125366 MiB name: RG_ring_2_69215 00:06:16.294 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:16.294 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:16.294 element at address: 0x200027e65680 with size: 0.023743 MiB 00:06:16.294 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:16.294 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:16.294 associated memzone info: size: 0.015991 MiB name: RG_ring_3_69215 00:06:16.294 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:06:16.294 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:16.294 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:16.294 associated memzone info: size: 0.000183 MiB name: MP_msgpool_69215 00:06:16.294 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:16.294 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_69215 00:06:16.294 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:06:16.294 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:16.294 17:52:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:16.294 17:52:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69215 00:06:16.294 17:52:33 -- common/autotest_common.sh@936 -- # '[' -z 69215 ']' 00:06:16.294 17:52:33 -- common/autotest_common.sh@940 -- # kill -0 69215 00:06:16.294 17:52:33 -- common/autotest_common.sh@941 -- # uname 00:06:16.295 17:52:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:16.295 17:52:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69215 00:06:16.295 17:52:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:16.295 killing process with pid 69215 00:06:16.295 17:52:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:16.295 17:52:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69215' 00:06:16.295 17:52:33 -- common/autotest_common.sh@955 -- # kill 69215 00:06:16.295 17:52:33 -- common/autotest_common.sh@960 -- # wait 69215 00:06:16.916 00:06:16.916 real 0m1.710s 00:06:16.916 user 0m1.664s 00:06:16.916 sys 0m0.519s 00:06:16.916 17:52:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.916 ************************************ 00:06:16.916 END TEST dpdk_mem_utility 00:06:16.916 ************************************ 00:06:16.916 
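
For anyone replaying the dpdk_mem_utility run above by hand, the flow is: start spdk_tgt, ask it over RPC to dump its DPDK memory statistics (the env_dpdk_get_mem_stats result above shows the dump lands in /tmp/spdk_mem_dump.txt), then post-process that file with dpdk_mem_info.py, optionally with -m 0 for the per-element view of heap 0. A minimal sketch, assuming the same vagrant checkout as this job and that scripts/rpc.py is the stock SPDK RPC client that rpc_cmd wraps:

  SPDK=/home/vagrant/spdk_repo/spdk
  $SPDK/build/bin/spdk_tgt &                     # same target binary as the test
  spdkpid=$!
  sleep 2                                        # crude stand-in for the suite's waitforlisten helper
  $SPDK/scripts/rpc.py env_dpdk_get_mem_stats    # writes /tmp/spdk_mem_dump.txt
  $SPDK/scripts/dpdk_mem_info.py                 # heap/mempool/memzone summary
  $SPDK/scripts/dpdk_mem_info.py -m 0            # per-element detail for heap id 0
  kill "$spdkpid"
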
17:52:33 -- common/autotest_common.sh@10 -- # set +x 00:06:16.916 17:52:33 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:16.916 17:52:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:16.916 17:52:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.916 17:52:33 -- common/autotest_common.sh@10 -- # set +x 00:06:16.916 ************************************ 00:06:16.916 START TEST event 00:06:16.916 ************************************ 00:06:16.916 17:52:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:16.916 * Looking for test storage... 00:06:16.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:16.916 17:52:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:16.916 17:52:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:16.916 17:52:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:17.174 17:52:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:17.174 17:52:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:17.174 17:52:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:17.174 17:52:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:17.174 17:52:33 -- scripts/common.sh@335 -- # IFS=.-: 00:06:17.174 17:52:33 -- scripts/common.sh@335 -- # read -ra ver1 00:06:17.174 17:52:33 -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.174 17:52:33 -- scripts/common.sh@336 -- # read -ra ver2 00:06:17.174 17:52:33 -- scripts/common.sh@337 -- # local 'op=<' 00:06:17.174 17:52:33 -- scripts/common.sh@339 -- # ver1_l=2 00:06:17.174 17:52:33 -- scripts/common.sh@340 -- # ver2_l=1 00:06:17.174 17:52:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:17.174 17:52:33 -- scripts/common.sh@343 -- # case "$op" in 00:06:17.174 17:52:33 -- scripts/common.sh@344 -- # : 1 00:06:17.174 17:52:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:17.174 17:52:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:17.174 17:52:33 -- scripts/common.sh@364 -- # decimal 1 00:06:17.174 17:52:33 -- scripts/common.sh@352 -- # local d=1 00:06:17.174 17:52:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.174 17:52:33 -- scripts/common.sh@354 -- # echo 1 00:06:17.174 17:52:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:17.174 17:52:33 -- scripts/common.sh@365 -- # decimal 2 00:06:17.174 17:52:33 -- scripts/common.sh@352 -- # local d=2 00:06:17.174 17:52:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.174 17:52:33 -- scripts/common.sh@354 -- # echo 2 00:06:17.175 17:52:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:17.175 17:52:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:17.175 17:52:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:17.175 17:52:33 -- scripts/common.sh@367 -- # return 0 00:06:17.175 17:52:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.175 17:52:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:17.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.175 --rc genhtml_branch_coverage=1 00:06:17.175 --rc genhtml_function_coverage=1 00:06:17.175 --rc genhtml_legend=1 00:06:17.175 --rc geninfo_all_blocks=1 00:06:17.175 --rc geninfo_unexecuted_blocks=1 00:06:17.175 00:06:17.175 ' 00:06:17.175 17:52:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:17.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.175 --rc genhtml_branch_coverage=1 00:06:17.175 --rc genhtml_function_coverage=1 00:06:17.175 --rc genhtml_legend=1 00:06:17.175 --rc geninfo_all_blocks=1 00:06:17.175 --rc geninfo_unexecuted_blocks=1 00:06:17.175 00:06:17.175 ' 00:06:17.175 17:52:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:17.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.175 --rc genhtml_branch_coverage=1 00:06:17.175 --rc genhtml_function_coverage=1 00:06:17.175 --rc genhtml_legend=1 00:06:17.175 --rc geninfo_all_blocks=1 00:06:17.175 --rc geninfo_unexecuted_blocks=1 00:06:17.175 00:06:17.175 ' 00:06:17.175 17:52:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:17.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.175 --rc genhtml_branch_coverage=1 00:06:17.175 --rc genhtml_function_coverage=1 00:06:17.175 --rc genhtml_legend=1 00:06:17.175 --rc geninfo_all_blocks=1 00:06:17.175 --rc geninfo_unexecuted_blocks=1 00:06:17.175 00:06:17.175 ' 00:06:17.175 17:52:33 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:17.175 17:52:33 -- bdev/nbd_common.sh@6 -- # set -e 00:06:17.175 17:52:33 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:17.175 17:52:33 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:17.175 17:52:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.175 17:52:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.175 ************************************ 00:06:17.175 START TEST event_perf 00:06:17.175 ************************************ 00:06:17.175 17:52:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:17.175 Running I/O for 1 seconds...[2024-11-26 17:52:33.939329] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:17.175 [2024-11-26 17:52:33.939604] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69300 ] 00:06:17.175 [2024-11-26 17:52:34.088491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.433 [2024-11-26 17:52:34.134745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.433 [2024-11-26 17:52:34.134975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.433 [2024-11-26 17:52:34.135959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.433 Running I/O for 1 seconds...[2024-11-26 17:52:34.136669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.370 00:06:18.370 lcore 0: 187549 00:06:18.370 lcore 1: 187548 00:06:18.370 lcore 2: 187549 00:06:18.370 lcore 3: 187550 00:06:18.370 done. 00:06:18.370 00:06:18.370 real 0m1.335s 00:06:18.370 user 0m4.094s 00:06:18.370 sys 0m0.123s 00:06:18.370 17:52:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.370 17:52:35 -- common/autotest_common.sh@10 -- # set +x 00:06:18.370 ************************************ 00:06:18.370 END TEST event_perf 00:06:18.370 ************************************ 00:06:18.629 17:52:35 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:18.629 17:52:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:18.629 17:52:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.629 17:52:35 -- common/autotest_common.sh@10 -- # set +x 00:06:18.629 ************************************ 00:06:18.629 START TEST event_reactor 00:06:18.629 ************************************ 00:06:18.629 17:52:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:18.629 [2024-11-26 17:52:35.358693] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:18.629 [2024-11-26 17:52:35.358830] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69334 ] 00:06:18.629 [2024-11-26 17:52:35.499815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.629 [2024-11-26 17:52:35.542339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.009 test_start 00:06:20.009 oneshot 00:06:20.009 tick 100 00:06:20.009 tick 100 00:06:20.009 tick 250 00:06:20.009 tick 100 00:06:20.009 tick 100 00:06:20.009 tick 250 00:06:20.009 tick 500 00:06:20.009 tick 100 00:06:20.009 tick 100 00:06:20.009 tick 100 00:06:20.009 tick 250 00:06:20.009 tick 100 00:06:20.009 tick 100 00:06:20.009 test_end 00:06:20.009 00:06:20.009 real 0m1.320s 00:06:20.009 user 0m1.129s 00:06:20.009 sys 0m0.083s 00:06:20.009 17:52:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.009 17:52:36 -- common/autotest_common.sh@10 -- # set +x 00:06:20.009 ************************************ 00:06:20.009 END TEST event_reactor 00:06:20.009 ************************************ 00:06:20.009 17:52:36 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:20.009 17:52:36 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:20.009 17:52:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.009 17:52:36 -- common/autotest_common.sh@10 -- # set +x 00:06:20.009 ************************************ 00:06:20.009 START TEST event_reactor_perf 00:06:20.009 ************************************ 00:06:20.009 17:52:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:20.010 [2024-11-26 17:52:36.746582] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:20.010 [2024-11-26 17:52:36.746695] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69365 ] 00:06:20.010 [2024-11-26 17:52:36.897049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.268 [2024-11-26 17:52:36.943330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.211 test_start 00:06:21.211 test_end 00:06:21.211 Performance: 369321 events per second 00:06:21.211 00:06:21.211 real 0m1.332s 00:06:21.211 user 0m1.128s 00:06:21.211 sys 0m0.095s 00:06:21.211 17:52:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:21.211 ************************************ 00:06:21.211 END TEST event_reactor_perf 00:06:21.211 ************************************ 00:06:21.211 17:52:38 -- common/autotest_common.sh@10 -- # set +x 00:06:21.211 17:52:38 -- event/event.sh@49 -- # uname -s 00:06:21.211 17:52:38 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:21.211 17:52:38 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:21.211 17:52:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:21.211 17:52:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.211 17:52:38 -- common/autotest_common.sh@10 -- # set +x 00:06:21.211 ************************************ 00:06:21.211 START TEST event_scheduler 00:06:21.211 ************************************ 00:06:21.211 17:52:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:21.471 * Looking for test storage... 00:06:21.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:21.471 17:52:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:21.471 17:52:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:21.471 17:52:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:21.471 17:52:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:21.471 17:52:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:21.471 17:52:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:21.471 17:52:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:21.471 17:52:38 -- scripts/common.sh@335 -- # IFS=.-: 00:06:21.471 17:52:38 -- scripts/common.sh@335 -- # read -ra ver1 00:06:21.471 17:52:38 -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.471 17:52:38 -- scripts/common.sh@336 -- # read -ra ver2 00:06:21.471 17:52:38 -- scripts/common.sh@337 -- # local 'op=<' 00:06:21.471 17:52:38 -- scripts/common.sh@339 -- # ver1_l=2 00:06:21.471 17:52:38 -- scripts/common.sh@340 -- # ver2_l=1 00:06:21.472 17:52:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:21.472 17:52:38 -- scripts/common.sh@343 -- # case "$op" in 00:06:21.472 17:52:38 -- scripts/common.sh@344 -- # : 1 00:06:21.472 17:52:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:21.472 17:52:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.472 17:52:38 -- scripts/common.sh@364 -- # decimal 1 00:06:21.472 17:52:38 -- scripts/common.sh@352 -- # local d=1 00:06:21.472 17:52:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.472 17:52:38 -- scripts/common.sh@354 -- # echo 1 00:06:21.472 17:52:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:21.472 17:52:38 -- scripts/common.sh@365 -- # decimal 2 00:06:21.472 17:52:38 -- scripts/common.sh@352 -- # local d=2 00:06:21.472 17:52:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.472 17:52:38 -- scripts/common.sh@354 -- # echo 2 00:06:21.472 17:52:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:21.472 17:52:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:21.472 17:52:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:21.472 17:52:38 -- scripts/common.sh@367 -- # return 0 00:06:21.472 17:52:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.472 17:52:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:21.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.472 --rc genhtml_branch_coverage=1 00:06:21.472 --rc genhtml_function_coverage=1 00:06:21.472 --rc genhtml_legend=1 00:06:21.472 --rc geninfo_all_blocks=1 00:06:21.472 --rc geninfo_unexecuted_blocks=1 00:06:21.472 00:06:21.472 ' 00:06:21.472 17:52:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:21.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.472 --rc genhtml_branch_coverage=1 00:06:21.472 --rc genhtml_function_coverage=1 00:06:21.472 --rc genhtml_legend=1 00:06:21.472 --rc geninfo_all_blocks=1 00:06:21.472 --rc geninfo_unexecuted_blocks=1 00:06:21.472 00:06:21.472 ' 00:06:21.472 17:52:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:21.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.472 --rc genhtml_branch_coverage=1 00:06:21.472 --rc genhtml_function_coverage=1 00:06:21.472 --rc genhtml_legend=1 00:06:21.472 --rc geninfo_all_blocks=1 00:06:21.472 --rc geninfo_unexecuted_blocks=1 00:06:21.472 00:06:21.472 ' 00:06:21.472 17:52:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:21.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.472 --rc genhtml_branch_coverage=1 00:06:21.472 --rc genhtml_function_coverage=1 00:06:21.472 --rc genhtml_legend=1 00:06:21.472 --rc geninfo_all_blocks=1 00:06:21.472 --rc geninfo_unexecuted_blocks=1 00:06:21.472 00:06:21.472 ' 00:06:21.472 17:52:38 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:21.472 17:52:38 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:21.472 17:52:38 -- scheduler/scheduler.sh@35 -- # scheduler_pid=69440 00:06:21.472 17:52:38 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.472 17:52:38 -- scheduler/scheduler.sh@37 -- # waitforlisten 69440 00:06:21.472 17:52:38 -- common/autotest_common.sh@829 -- # '[' -z 69440 ']' 00:06:21.472 17:52:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.472 17:52:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.472 17:52:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
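
The scripts/common.sh xtrace above is autotest's lcov version gate: lt 1.15 2 splits both version strings on dots via cmp_versions, compares the fields numerically left to right, and returns success because the installed lcov 1.15 sorts before 2, so the suite keeps the legacy --rc lcov_branch_coverage / --rc lcov_function_coverage spelling. A condensed, self-contained sketch of the same comparison (not the suite's exact helper):

  # Returns 0 (true) when dot-separated version $1 sorts before $2.
  version_lt() {
      local IFS=. i
      local -a a=($1) b=($2)
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((${a[i]:-0} < ${b[i]:-0})) && return 0
          ((${a[i]:-0} > ${b[i]:-0})) && return 1
      done
      return 1    # equal versions are not less-than
  }
  version_lt 1.15 2 && echo 'lcov predates 2.x: use the old --rc flags'
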
00:06:21.472 17:52:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.472 17:52:38 -- common/autotest_common.sh@10 -- # set +x 00:06:21.731 [2024-11-26 17:52:38.433789] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:21.731 [2024-11-26 17:52:38.434318] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69440 ] 00:06:21.731 [2024-11-26 17:52:38.598679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:21.731 [2024-11-26 17:52:38.646996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.731 [2024-11-26 17:52:38.647192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.731 [2024-11-26 17:52:38.647390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.731 [2024-11-26 17:52:38.647257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.670 17:52:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.670 17:52:39 -- common/autotest_common.sh@862 -- # return 0 00:06:22.670 17:52:39 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:22.670 17:52:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.670 17:52:39 -- common/autotest_common.sh@10 -- # set +x 00:06:22.670 POWER: Env isn't set yet! 00:06:22.670 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:22.670 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:22.670 POWER: Cannot set governor of lcore 0 to userspace 00:06:22.670 POWER: Attempting to initialise PSTAT power management... 00:06:22.670 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:22.670 POWER: Cannot set governor of lcore 0 to performance 00:06:22.670 POWER: Attempting to initialise CPPC power management... 00:06:22.670 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:22.670 POWER: Cannot set governor of lcore 0 to userspace 00:06:22.671 POWER: Attempting to initialise VM power management... 
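
The POWER messages here and just below are the dynamic governor probing ACPI cpufreq, PSTAT, CPPC, and finally the virtio guest power channel; inside this VM they all fail, so scheduler_dynamic falls back to its built-in limits (load 20, core 80, busy 95), which is expected and harmless for the test. The part worth imitating is the RPC ordering: the app was launched with --wait-for-rpc, so framework_set_scheduler is issued while startup is paused and framework_start_init then releases the reactors. A hedged sketch of that handshake, assuming the stock scripts/rpc.py client on the default /var/tmp/spdk.sock:

  SPDK=/home/vagrant/spdk_repo/spdk
  # Same invocation as above: cores 0-3 (-m 0xF), main lcore 2 (-p 0x2), paused until RPC.
  $SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  scheduler_pid=$!
  sleep 2                                                # crude stand-in for waitforlisten
  $SPDK/scripts/rpc.py framework_set_scheduler dynamic   # pick the scheduler before init
  $SPDK/scripts/rpc.py framework_start_init              # reactors start; scheduler takes over
  # ...the suite then drives scheduler_thread_create et al. through its RPC plugin...
  kill "$scheduler_pid"
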
00:06:22.671 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:22.671 POWER: Unable to set Power Management Environment for lcore 0 00:06:22.671 [2024-11-26 17:52:39.235964] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:22.671 [2024-11-26 17:52:39.235987] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:22.671 [2024-11-26 17:52:39.236014] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:22.671 [2024-11-26 17:52:39.236048] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:22.671 [2024-11-26 17:52:39.236060] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:22.671 [2024-11-26 17:52:39.236072] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:22.671 17:52:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.671 17:52:39 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:22.671 17:52:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.671 17:52:39 -- common/autotest_common.sh@10 -- # set +x 00:06:22.671 [2024-11-26 17:52:39.306749] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:22.671 17:52:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.671 17:52:39 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:22.671 17:52:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:22.671 17:52:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.671 17:52:39 -- common/autotest_common.sh@10 -- # set +x 00:06:22.671 ************************************ 00:06:22.671 START TEST scheduler_create_thread 00:06:22.671 ************************************ 00:06:22.671 17:52:39 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:06:22.671 17:52:39 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:22.671 17:52:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.671 17:52:39 -- common/autotest_common.sh@10 -- # set +x 00:06:22.671 2 00:06:22.671 17:52:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.671 17:52:39 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:22.671 17:52:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.671 17:52:39 -- common/autotest_common.sh@10 -- # set +x 00:06:22.671 3 00:06:22.671 17:52:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.671 17:52:39 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:22.671 17:52:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.671 17:52:39 -- common/autotest_common.sh@10 -- # set +x 00:06:22.671 4 00:06:22.671 17:52:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.671 17:52:39 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:22.671 17:52:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.671 17:52:39 -- common/autotest_common.sh@10 -- # set +x 00:06:22.671 5 00:06:22.671 17:52:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.671 17:52:39 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:22.671 17:52:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.671 17:52:39 -- common/autotest_common.sh@10 -- # set +x 00:06:22.671 6 00:06:22.671 17:52:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.671 17:52:39 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:22.671 17:52:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.671 17:52:39 -- common/autotest_common.sh@10 -- # set +x 00:06:22.671 7 00:06:22.671 17:52:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.671 17:52:39 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:22.671 17:52:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.671 17:52:39 -- common/autotest_common.sh@10 -- # set +x 00:06:22.671 8 00:06:22.671 17:52:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.671 17:52:39 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:22.671 17:52:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.671 17:52:39 -- common/autotest_common.sh@10 -- # set +x 00:06:22.671 9 00:06:22.671 17:52:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.671 17:52:39 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:22.671 17:52:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.671 17:52:39 -- common/autotest_common.sh@10 -- # set +x 00:06:22.671 10 00:06:22.671 17:52:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.671 17:52:39 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:22.671 17:52:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.671 17:52:39 -- common/autotest_common.sh@10 -- # set +x 00:06:22.671 17:52:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.671 17:52:39 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:22.671 17:52:39 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:22.671 17:52:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.671 17:52:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.239 17:52:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.239 17:52:39 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:23.239 17:52:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.239 17:52:39 -- common/autotest_common.sh@10 -- # set +x 00:06:24.619 17:52:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.619 17:52:41 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:24.619 17:52:41 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:24.619 17:52:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.619 17:52:41 -- common/autotest_common.sh@10 -- # set +x 00:06:25.561 17:52:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.561 00:06:25.561 real 0m3.088s 00:06:25.561 ************************************ 00:06:25.561 END TEST scheduler_create_thread 00:06:25.561 ************************************ 00:06:25.561 user 0m0.018s 00:06:25.561 sys 0m0.009s 00:06:25.561 17:52:42 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.561 17:52:42 -- common/autotest_common.sh@10 -- # set +x 00:06:25.561 17:52:42 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:25.561 17:52:42 -- scheduler/scheduler.sh@46 -- # killprocess 69440 00:06:25.561 17:52:42 -- common/autotest_common.sh@936 -- # '[' -z 69440 ']' 00:06:25.561 17:52:42 -- common/autotest_common.sh@940 -- # kill -0 69440 00:06:25.561 17:52:42 -- common/autotest_common.sh@941 -- # uname 00:06:25.561 17:52:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:25.561 17:52:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69440 00:06:25.820 killing process with pid 69440 00:06:25.820 17:52:42 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:25.820 17:52:42 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:25.820 17:52:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69440' 00:06:25.820 17:52:42 -- common/autotest_common.sh@955 -- # kill 69440 00:06:25.820 17:52:42 -- common/autotest_common.sh@960 -- # wait 69440 00:06:26.080 [2024-11-26 17:52:42.789961] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:26.338 00:06:26.338 real 0m4.939s 00:06:26.338 user 0m8.959s 00:06:26.338 sys 0m0.521s 00:06:26.338 17:52:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.338 ************************************ 00:06:26.338 END TEST event_scheduler 00:06:26.338 ************************************ 00:06:26.338 17:52:43 -- common/autotest_common.sh@10 -- # set +x 00:06:26.338 17:52:43 -- event/event.sh@51 -- # modprobe -n nbd 00:06:26.338 17:52:43 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:26.338 17:52:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:26.338 17:52:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.338 17:52:43 -- common/autotest_common.sh@10 -- # set +x 00:06:26.338 ************************************ 00:06:26.338 START TEST app_repeat 00:06:26.338 ************************************ 00:06:26.338 17:52:43 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:06:26.338 17:52:43 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.338 17:52:43 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.338 17:52:43 -- event/event.sh@13 -- # local nbd_list 00:06:26.338 17:52:43 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.338 17:52:43 -- event/event.sh@14 -- # local bdev_list 00:06:26.338 17:52:43 -- event/event.sh@15 -- # local repeat_times=4 00:06:26.338 17:52:43 -- event/event.sh@17 -- # modprobe nbd 00:06:26.338 Process app_repeat pid: 69541 00:06:26.338 17:52:43 -- event/event.sh@19 -- # repeat_pid=69541 00:06:26.338 17:52:43 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:26.338 17:52:43 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.338 17:52:43 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 69541' 00:06:26.338 17:52:43 -- event/event.sh@23 -- # for i in {0..2} 00:06:26.338 spdk_app_start Round 0 00:06:26.338 17:52:43 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:26.338 17:52:43 -- event/event.sh@25 -- # waitforlisten 69541 /var/tmp/spdk-nbd.sock 00:06:26.338 17:52:43 -- common/autotest_common.sh@829 -- # '[' -z 69541 ']' 00:06:26.338 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk-nbd.sock... 00:06:26.338 17:52:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.338 17:52:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.338 17:52:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:26.338 17:52:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.338 17:52:43 -- common/autotest_common.sh@10 -- # set +x 00:06:26.338 [2024-11-26 17:52:43.196243] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:26.338 [2024-11-26 17:52:43.196362] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69541 ] 00:06:26.598 [2024-11-26 17:52:43.346487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.598 [2024-11-26 17:52:43.389933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.598 [2024-11-26 17:52:43.390017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.165 17:52:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.165 17:52:44 -- common/autotest_common.sh@862 -- # return 0 00:06:27.165 17:52:44 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.424 Malloc0 00:06:27.424 17:52:44 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.684 Malloc1 00:06:27.684 17:52:44 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.684 17:52:44 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.684 17:52:44 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.684 17:52:44 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:27.684 17:52:44 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.684 17:52:44 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:27.684 17:52:44 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.684 17:52:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.684 17:52:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.684 17:52:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:27.684 17:52:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.684 17:52:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:27.684 17:52:44 -- bdev/nbd_common.sh@12 -- # local i 00:06:27.684 17:52:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:27.684 17:52:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.684 17:52:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:27.943 /dev/nbd0 00:06:27.943 17:52:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:27.943 17:52:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:27.943 17:52:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:27.943 17:52:44 -- common/autotest_common.sh@867 -- # local i 00:06:27.943 17:52:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:27.943 17:52:44 -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:27.943 17:52:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:27.943 17:52:44 -- common/autotest_common.sh@871 -- # break 00:06:27.943 17:52:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:27.943 17:52:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:27.943 17:52:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.943 1+0 records in 00:06:27.943 1+0 records out 00:06:27.943 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251647 s, 16.3 MB/s 00:06:27.943 17:52:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.943 17:52:44 -- common/autotest_common.sh@884 -- # size=4096 00:06:27.943 17:52:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.943 17:52:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:27.943 17:52:44 -- common/autotest_common.sh@887 -- # return 0 00:06:27.943 17:52:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.943 17:52:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.943 17:52:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:28.203 /dev/nbd1 00:06:28.203 17:52:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.203 17:52:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.203 17:52:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:28.203 17:52:44 -- common/autotest_common.sh@867 -- # local i 00:06:28.203 17:52:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:28.203 17:52:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:28.203 17:52:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:28.203 17:52:44 -- common/autotest_common.sh@871 -- # break 00:06:28.203 17:52:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:28.203 17:52:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:28.203 17:52:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.203 1+0 records in 00:06:28.203 1+0 records out 00:06:28.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386073 s, 10.6 MB/s 00:06:28.203 17:52:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.203 17:52:44 -- common/autotest_common.sh@884 -- # size=4096 00:06:28.203 17:52:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.203 17:52:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:28.203 17:52:44 -- common/autotest_common.sh@887 -- # return 0 00:06:28.203 17:52:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.203 17:52:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.203 17:52:44 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.203 17:52:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.203 17:52:44 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:28.463 { 00:06:28.463 "nbd_device": "/dev/nbd0", 00:06:28.463 "bdev_name": "Malloc0" 00:06:28.463 }, 00:06:28.463 { 00:06:28.463 "nbd_device": "/dev/nbd1", 
00:06:28.463 "bdev_name": "Malloc1" 00:06:28.463 } 00:06:28.463 ]' 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:28.463 { 00:06:28.463 "nbd_device": "/dev/nbd0", 00:06:28.463 "bdev_name": "Malloc0" 00:06:28.463 }, 00:06:28.463 { 00:06:28.463 "nbd_device": "/dev/nbd1", 00:06:28.463 "bdev_name": "Malloc1" 00:06:28.463 } 00:06:28.463 ]' 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:28.463 /dev/nbd1' 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:28.463 /dev/nbd1' 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@65 -- # count=2 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@95 -- # count=2 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:28.463 256+0 records in 00:06:28.463 256+0 records out 00:06:28.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122853 s, 85.4 MB/s 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:28.463 256+0 records in 00:06:28.463 256+0 records out 00:06:28.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0306274 s, 34.2 MB/s 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:28.463 256+0 records in 00:06:28.463 256+0 records out 00:06:28.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285928 s, 36.7 MB/s 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@51 -- # local i 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.463 17:52:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.723 17:52:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.723 17:52:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.723 17:52:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.724 17:52:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.724 17:52:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.724 17:52:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.724 17:52:45 -- bdev/nbd_common.sh@41 -- # break 00:06:28.724 17:52:45 -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.724 17:52:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.724 17:52:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:28.983 17:52:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:28.983 17:52:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:28.983 17:52:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:28.983 17:52:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.983 17:52:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.983 17:52:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:28.983 17:52:45 -- bdev/nbd_common.sh@41 -- # break 00:06:28.983 17:52:45 -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.983 17:52:45 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.983 17:52:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.983 17:52:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.242 17:52:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:29.242 17:52:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.242 17:52:45 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:29.242 17:52:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:29.242 17:52:46 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:29.242 17:52:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.242 17:52:46 -- bdev/nbd_common.sh@65 -- # true 00:06:29.242 17:52:46 -- bdev/nbd_common.sh@65 -- # count=0 00:06:29.242 17:52:46 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:29.242 17:52:46 -- bdev/nbd_common.sh@104 -- # count=0 00:06:29.242 17:52:46 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:29.242 17:52:46 -- bdev/nbd_common.sh@109 -- # return 0 00:06:29.242 17:52:46 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:29.501 17:52:46 -- event/event.sh@35 -- # sleep 3 00:06:29.760 [2024-11-26 17:52:46.429833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.760 [2024-11-26 17:52:46.468212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.760 [2024-11-26 
17:52:46.468212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.760 [2024-11-26 17:52:46.511680] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:29.760 [2024-11-26 17:52:46.511734] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:33.053 17:52:49 -- event/event.sh@23 -- # for i in {0..2} 00:06:33.053 spdk_app_start Round 1 00:06:33.053 17:52:49 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:33.053 17:52:49 -- event/event.sh@25 -- # waitforlisten 69541 /var/tmp/spdk-nbd.sock 00:06:33.053 17:52:49 -- common/autotest_common.sh@829 -- # '[' -z 69541 ']' 00:06:33.053 17:52:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.053 17:52:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:33.053 17:52:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:33.053 17:52:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.053 17:52:49 -- common/autotest_common.sh@10 -- # set +x 00:06:33.053 17:52:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.053 17:52:49 -- common/autotest_common.sh@862 -- # return 0 00:06:33.053 17:52:49 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.053 Malloc0 00:06:33.053 17:52:49 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.053 Malloc1 00:06:33.053 17:52:49 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.053 17:52:49 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.053 17:52:49 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.053 17:52:49 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:33.053 17:52:49 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.053 17:52:49 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:33.053 17:52:49 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.053 17:52:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.053 17:52:49 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.053 17:52:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:33.053 17:52:49 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.053 17:52:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:33.053 17:52:49 -- bdev/nbd_common.sh@12 -- # local i 00:06:33.053 17:52:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:33.053 17:52:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.053 17:52:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:33.311 /dev/nbd0 00:06:33.311 17:52:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:33.311 17:52:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:33.311 17:52:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:33.311 17:52:50 -- common/autotest_common.sh@867 -- # local i 00:06:33.311 17:52:50 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:06:33.311 17:52:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:33.311 17:52:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:33.311 17:52:50 -- common/autotest_common.sh@871 -- # break 00:06:33.311 17:52:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:33.311 17:52:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:33.311 17:52:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.311 1+0 records in 00:06:33.311 1+0 records out 00:06:33.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229695 s, 17.8 MB/s 00:06:33.311 17:52:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.311 17:52:50 -- common/autotest_common.sh@884 -- # size=4096 00:06:33.311 17:52:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.311 17:52:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:33.311 17:52:50 -- common/autotest_common.sh@887 -- # return 0 00:06:33.311 17:52:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.311 17:52:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.311 17:52:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:33.568 /dev/nbd1 00:06:33.568 17:52:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:33.568 17:52:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:33.568 17:52:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:33.568 17:52:50 -- common/autotest_common.sh@867 -- # local i 00:06:33.568 17:52:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:33.568 17:52:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:33.568 17:52:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:33.568 17:52:50 -- common/autotest_common.sh@871 -- # break 00:06:33.568 17:52:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:33.568 17:52:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:33.568 17:52:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.568 1+0 records in 00:06:33.568 1+0 records out 00:06:33.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039717 s, 10.3 MB/s 00:06:33.568 17:52:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.568 17:52:50 -- common/autotest_common.sh@884 -- # size=4096 00:06:33.568 17:52:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.568 17:52:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:33.568 17:52:50 -- common/autotest_common.sh@887 -- # return 0 00:06:33.568 17:52:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.568 17:52:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.568 17:52:50 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.568 17:52:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.568 17:52:50 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:33.826 { 00:06:33.826 "nbd_device": "/dev/nbd0", 00:06:33.826 "bdev_name": "Malloc0" 00:06:33.826 }, 00:06:33.826 { 00:06:33.826 
"nbd_device": "/dev/nbd1", 00:06:33.826 "bdev_name": "Malloc1" 00:06:33.826 } 00:06:33.826 ]' 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.826 { 00:06:33.826 "nbd_device": "/dev/nbd0", 00:06:33.826 "bdev_name": "Malloc0" 00:06:33.826 }, 00:06:33.826 { 00:06:33.826 "nbd_device": "/dev/nbd1", 00:06:33.826 "bdev_name": "Malloc1" 00:06:33.826 } 00:06:33.826 ]' 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:33.826 /dev/nbd1' 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:33.826 /dev/nbd1' 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@65 -- # count=2 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@95 -- # count=2 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:33.826 256+0 records in 00:06:33.826 256+0 records out 00:06:33.826 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121797 s, 86.1 MB/s 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.826 256+0 records in 00:06:33.826 256+0 records out 00:06:33.826 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268916 s, 39.0 MB/s 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:33.826 256+0 records in 00:06:33.826 256+0 records out 00:06:33.826 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279042 s, 37.6 MB/s 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:33.826 17:52:50 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@51 -- # local i 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.826 17:52:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:34.084 17:52:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:34.084 17:52:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:34.084 17:52:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:34.084 17:52:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.084 17:52:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.084 17:52:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:34.084 17:52:50 -- bdev/nbd_common.sh@41 -- # break 00:06:34.084 17:52:50 -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.084 17:52:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.084 17:52:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.342 17:52:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.342 17:52:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:34.342 17:52:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.342 17:52:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.342 17:52:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.342 17:52:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.342 17:52:51 -- bdev/nbd_common.sh@41 -- # break 00:06:34.342 17:52:51 -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.342 17:52:51 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.342 17:52:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.342 17:52:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.600 17:52:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.600 17:52:51 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.600 17:52:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.600 17:52:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.600 17:52:51 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.600 17:52:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.601 17:52:51 -- bdev/nbd_common.sh@65 -- # true 00:06:34.601 17:52:51 -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.601 17:52:51 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.601 17:52:51 -- bdev/nbd_common.sh@104 -- # count=0 00:06:34.601 17:52:51 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.601 17:52:51 -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.601 17:52:51 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:34.859 17:52:51 -- event/event.sh@35 -- # sleep 3 00:06:34.859 [2024-11-26 17:52:51.747832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.117 [2024-11-26 17:52:51.785765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 
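The dd write / cmp read-back pairs traced above are the data-integrity core of nbd_rpc_data_verify: one 1 MiB random file is written through each /dev/nbdX with O_DIRECT, then compared byte-for-byte against the source. A standalone sketch of that pattern follows; the device list, temp-file path, and set -e error handling are illustrative assumptions, not the harness's actual variables.

    #!/usr/bin/env bash
    set -euo pipefail

    nbd_list=(/dev/nbd0 /dev/nbd1)        # assumed already exported via nbd_start_disk
    tmp_file=$(mktemp /tmp/nbdrandtest.XXXXXX)

    # one 1 MiB random buffer, shared so every device receives identical data
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

    for dev in "${nbd_list[@]}"; do
        # O_DIRECT bypasses the page cache, so cmp reads what the bdev actually stored
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp_file" "$dev"   # any mismatch exits non-zero and fails the run
    done

    rm "$tmp_file"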
00:06:35.118 [2024-11-26 17:52:51.785784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.118 [2024-11-26 17:52:51.829579] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:35.118 [2024-11-26 17:52:51.829660] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:37.716 spdk_app_start Round 2 00:06:37.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:37.716 17:52:54 -- event/event.sh@23 -- # for i in {0..2} 00:06:37.716 17:52:54 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:37.716 17:52:54 -- event/event.sh@25 -- # waitforlisten 69541 /var/tmp/spdk-nbd.sock 00:06:37.716 17:52:54 -- common/autotest_common.sh@829 -- # '[' -z 69541 ']' 00:06:37.716 17:52:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.716 17:52:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.716 17:52:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:37.716 17:52:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.716 17:52:54 -- common/autotest_common.sh@10 -- # set +x 00:06:37.975 17:52:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.975 17:52:54 -- common/autotest_common.sh@862 -- # return 0 00:06:37.975 17:52:54 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.233 Malloc0 00:06:38.233 17:52:54 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.491 Malloc1 00:06:38.491 17:52:55 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.491 17:52:55 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.492 17:52:55 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.492 17:52:55 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:38.492 17:52:55 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.492 17:52:55 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:38.492 17:52:55 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.492 17:52:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.492 17:52:55 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.492 17:52:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:38.492 17:52:55 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.492 17:52:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:38.492 17:52:55 -- bdev/nbd_common.sh@12 -- # local i 00:06:38.492 17:52:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:38.492 17:52:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.492 17:52:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:38.492 /dev/nbd0 00:06:38.492 17:52:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:38.492 17:52:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:38.492 17:52:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:38.492 17:52:55 -- common/autotest_common.sh@867 -- # local i 00:06:38.492 17:52:55 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:38.492 17:52:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:38.492 17:52:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:38.751 17:52:55 -- common/autotest_common.sh@871 -- # break 00:06:38.751 17:52:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:38.751 17:52:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:38.751 17:52:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.751 1+0 records in 00:06:38.751 1+0 records out 00:06:38.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212854 s, 19.2 MB/s 00:06:38.751 17:52:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:38.751 17:52:55 -- common/autotest_common.sh@884 -- # size=4096 00:06:38.751 17:52:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:38.751 17:52:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:38.751 17:52:55 -- common/autotest_common.sh@887 -- # return 0 00:06:38.751 17:52:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.751 17:52:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.751 17:52:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:38.751 /dev/nbd1 00:06:38.751 17:52:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:38.751 17:52:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:38.751 17:52:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:38.751 17:52:55 -- common/autotest_common.sh@867 -- # local i 00:06:38.751 17:52:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:38.751 17:52:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:38.751 17:52:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:38.751 17:52:55 -- common/autotest_common.sh@871 -- # break 00:06:38.751 17:52:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:38.751 17:52:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:38.751 17:52:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.751 1+0 records in 00:06:38.751 1+0 records out 00:06:38.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216335 s, 18.9 MB/s 00:06:38.751 17:52:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:38.751 17:52:55 -- common/autotest_common.sh@884 -- # size=4096 00:06:38.751 17:52:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:38.751 17:52:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:38.751 17:52:55 -- common/autotest_common.sh@887 -- # return 0 00:06:38.751 17:52:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.751 17:52:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.751 17:52:55 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.751 17:52:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.751 17:52:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.010 17:52:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:39.010 { 00:06:39.010 "nbd_device": "/dev/nbd0", 00:06:39.010 "bdev_name": "Malloc0" 
00:06:39.010 }, 00:06:39.010 { 00:06:39.010 "nbd_device": "/dev/nbd1", 00:06:39.010 "bdev_name": "Malloc1" 00:06:39.010 } 00:06:39.010 ]' 00:06:39.010 17:52:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.010 17:52:55 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.010 { 00:06:39.010 "nbd_device": "/dev/nbd0", 00:06:39.010 "bdev_name": "Malloc0" 00:06:39.010 }, 00:06:39.010 { 00:06:39.010 "nbd_device": "/dev/nbd1", 00:06:39.010 "bdev_name": "Malloc1" 00:06:39.010 } 00:06:39.010 ]' 00:06:39.010 17:52:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:39.010 /dev/nbd1' 00:06:39.010 17:52:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.010 17:52:55 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:39.010 /dev/nbd1' 00:06:39.010 17:52:55 -- bdev/nbd_common.sh@65 -- # count=2 00:06:39.010 17:52:55 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:39.010 17:52:55 -- bdev/nbd_common.sh@95 -- # count=2 00:06:39.010 17:52:55 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:39.010 17:52:55 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:39.010 17:52:55 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.010 17:52:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.010 17:52:55 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:39.010 17:52:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.010 17:52:55 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:39.010 17:52:55 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:39.010 256+0 records in 00:06:39.010 256+0 records out 00:06:39.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114108 s, 91.9 MB/s 00:06:39.010 17:52:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.010 17:52:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:39.269 256+0 records in 00:06:39.269 256+0 records out 00:06:39.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259211 s, 40.5 MB/s 00:06:39.269 17:52:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.269 17:52:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:39.269 256+0 records in 00:06:39.269 256+0 records out 00:06:39.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0356311 s, 29.4 MB/s 00:06:39.269 17:52:55 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:39.269 17:52:55 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.269 17:52:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.269 17:52:55 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:39.269 17:52:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.269 17:52:55 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:39.269 17:52:55 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:39.269 17:52:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.269 17:52:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:39.270 17:52:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.270 17:52:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:39.270 17:52:56 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.270 17:52:56 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:39.270 17:52:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.270 17:52:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.270 17:52:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.270 17:52:56 -- bdev/nbd_common.sh@51 -- # local i 00:06:39.270 17:52:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.270 17:52:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:39.528 17:52:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:39.528 17:52:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:39.528 17:52:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:39.528 17:52:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.528 17:52:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.528 17:52:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:39.528 17:52:56 -- bdev/nbd_common.sh@41 -- # break 00:06:39.528 17:52:56 -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.528 17:52:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.528 17:52:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:39.528 17:52:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:39.528 17:52:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:39.528 17:52:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:39.528 17:52:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.528 17:52:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.528 17:52:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:39.528 17:52:56 -- bdev/nbd_common.sh@41 -- # break 00:06:39.528 17:52:56 -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.528 17:52:56 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.528 17:52:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.528 17:52:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.787 17:52:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:39.787 17:52:56 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:39.787 17:52:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.787 17:52:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:39.787 17:52:56 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:39.787 17:52:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.787 17:52:56 -- bdev/nbd_common.sh@65 -- # true 00:06:39.787 17:52:56 -- bdev/nbd_common.sh@65 -- # count=0 00:06:39.787 17:52:56 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:39.787 17:52:56 -- bdev/nbd_common.sh@104 -- # count=0 00:06:39.787 17:52:56 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:39.787 17:52:56 -- bdev/nbd_common.sh@109 -- # return 0 00:06:39.787 17:52:56 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:40.045 17:52:56 -- event/event.sh@35 -- # sleep 3 00:06:40.304 [2024-11-26 17:52:57.078223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.305 [2024-11-26 17:52:57.122885] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 0 00:06:40.305 [2024-11-26 17:52:57.122889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.305 [2024-11-26 17:52:57.166631] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:40.305 [2024-11-26 17:52:57.166695] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:43.640 17:52:59 -- event/event.sh@38 -- # waitforlisten 69541 /var/tmp/spdk-nbd.sock 00:06:43.640 17:52:59 -- common/autotest_common.sh@829 -- # '[' -z 69541 ']' 00:06:43.640 17:52:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.640 17:52:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:43.640 17:52:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:43.640 17:52:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.640 17:52:59 -- common/autotest_common.sh@10 -- # set +x 00:06:43.640 17:53:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.640 17:53:00 -- common/autotest_common.sh@862 -- # return 0 00:06:43.640 17:53:00 -- event/event.sh@39 -- # killprocess 69541 00:06:43.640 17:53:00 -- common/autotest_common.sh@936 -- # '[' -z 69541 ']' 00:06:43.640 17:53:00 -- common/autotest_common.sh@940 -- # kill -0 69541 00:06:43.640 17:53:00 -- common/autotest_common.sh@941 -- # uname 00:06:43.640 17:53:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:43.640 17:53:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69541 00:06:43.640 killing process with pid 69541 00:06:43.640 17:53:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:43.640 17:53:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:43.640 17:53:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69541' 00:06:43.640 17:53:00 -- common/autotest_common.sh@955 -- # kill 69541 00:06:43.640 17:53:00 -- common/autotest_common.sh@960 -- # wait 69541 00:06:43.640 spdk_app_start is called in Round 0. 00:06:43.640 Shutdown signal received, stop current app iteration 00:06:43.640 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:43.640 spdk_app_start is called in Round 1. 00:06:43.640 Shutdown signal received, stop current app iteration 00:06:43.640 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:43.640 spdk_app_start is called in Round 2. 00:06:43.640 Shutdown signal received, stop current app iteration 00:06:43.640 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:43.640 spdk_app_start is called in Round 3. 
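killprocess, traced above for pid 69541, is the harness's guarded teardown: probe the pid with kill -0, resolve its command name via ps, refuse to signal a sudo wrapper, then kill and reap. A condensed sketch is below; the real helper's sudo special-casing is reduced to a bail-out, and the messages mirror the trace.

    # condensed killprocess: probe, identify, kill, reap
    killprocess() {
        local pid=$1 process_name=
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 1              # bail out if already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0 in the trace
            [ "$process_name" = sudo ] && return 1          # never signal a sudo wrapper directly
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reap; SPDK targets catch SIGTERM and shut down cleanly, so this normally returns 0
    }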
00:06:43.640 Shutdown signal received, stop current app iteration 00:06:43.640 17:53:00 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:43.640 17:53:00 -- event/event.sh@42 -- # return 0 00:06:43.640 00:06:43.640 real 0m17.287s 00:06:43.640 user 0m37.498s 00:06:43.640 sys 0m3.038s 00:06:43.640 17:53:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.640 17:53:00 -- common/autotest_common.sh@10 -- # set +x 00:06:43.640 ************************************ 00:06:43.640 END TEST app_repeat 00:06:43.640 ************************************ 00:06:43.640 17:53:00 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:43.640 17:53:00 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:43.640 17:53:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:43.640 17:53:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.640 17:53:00 -- common/autotest_common.sh@10 -- # set +x 00:06:43.640 ************************************ 00:06:43.640 START TEST cpu_locks 00:06:43.640 ************************************ 00:06:43.640 17:53:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:43.900 * Looking for test storage... 00:06:43.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:43.900 17:53:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:43.900 17:53:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:43.900 17:53:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:43.900 17:53:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:43.900 17:53:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:43.900 17:53:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:43.900 17:53:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:43.900 17:53:00 -- scripts/common.sh@335 -- # IFS=.-: 00:06:43.900 17:53:00 -- scripts/common.sh@335 -- # read -ra ver1 00:06:43.900 17:53:00 -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.900 17:53:00 -- scripts/common.sh@336 -- # read -ra ver2 00:06:43.900 17:53:00 -- scripts/common.sh@337 -- # local 'op=<' 00:06:43.900 17:53:00 -- scripts/common.sh@339 -- # ver1_l=2 00:06:43.900 17:53:00 -- scripts/common.sh@340 -- # ver2_l=1 00:06:43.900 17:53:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:43.900 17:53:00 -- scripts/common.sh@343 -- # case "$op" in 00:06:43.900 17:53:00 -- scripts/common.sh@344 -- # : 1 00:06:43.900 17:53:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:43.900 17:53:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.900 17:53:00 -- scripts/common.sh@364 -- # decimal 1 00:06:43.900 17:53:00 -- scripts/common.sh@352 -- # local d=1 00:06:43.900 17:53:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.900 17:53:00 -- scripts/common.sh@354 -- # echo 1 00:06:43.900 17:53:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:43.900 17:53:00 -- scripts/common.sh@365 -- # decimal 2 00:06:43.900 17:53:00 -- scripts/common.sh@352 -- # local d=2 00:06:43.900 17:53:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.900 17:53:00 -- scripts/common.sh@354 -- # echo 2 00:06:43.900 17:53:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:43.900 17:53:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:43.900 17:53:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:43.900 17:53:00 -- scripts/common.sh@367 -- # return 0 00:06:43.900 17:53:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.900 17:53:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:43.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.900 --rc genhtml_branch_coverage=1 00:06:43.900 --rc genhtml_function_coverage=1 00:06:43.900 --rc genhtml_legend=1 00:06:43.900 --rc geninfo_all_blocks=1 00:06:43.900 --rc geninfo_unexecuted_blocks=1 00:06:43.900 00:06:43.900 ' 00:06:43.900 17:53:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:43.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.900 --rc genhtml_branch_coverage=1 00:06:43.900 --rc genhtml_function_coverage=1 00:06:43.900 --rc genhtml_legend=1 00:06:43.900 --rc geninfo_all_blocks=1 00:06:43.900 --rc geninfo_unexecuted_blocks=1 00:06:43.900 00:06:43.900 ' 00:06:43.900 17:53:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:43.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.900 --rc genhtml_branch_coverage=1 00:06:43.900 --rc genhtml_function_coverage=1 00:06:43.900 --rc genhtml_legend=1 00:06:43.900 --rc geninfo_all_blocks=1 00:06:43.900 --rc geninfo_unexecuted_blocks=1 00:06:43.900 00:06:43.900 ' 00:06:43.900 17:53:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:43.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.900 --rc genhtml_branch_coverage=1 00:06:43.900 --rc genhtml_function_coverage=1 00:06:43.900 --rc genhtml_legend=1 00:06:43.900 --rc geninfo_all_blocks=1 00:06:43.900 --rc geninfo_unexecuted_blocks=1 00:06:43.900 00:06:43.900 ' 00:06:43.900 17:53:00 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:43.900 17:53:00 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:43.900 17:53:00 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:43.900 17:53:00 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:43.900 17:53:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:43.900 17:53:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.900 17:53:00 -- common/autotest_common.sh@10 -- # set +x 00:06:43.900 ************************************ 00:06:43.900 START TEST default_locks 00:06:43.900 ************************************ 00:06:43.900 17:53:00 -- common/autotest_common.sh@1114 -- # default_locks 00:06:43.900 17:53:00 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69967 00:06:43.900 17:53:00 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.900 17:53:00 -- event/cpu_locks.sh@47 -- # waitforlisten 
69967 00:06:43.900 17:53:00 -- common/autotest_common.sh@829 -- # '[' -z 69967 ']' 00:06:43.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.900 17:53:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.900 17:53:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.900 17:53:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.900 17:53:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.900 17:53:00 -- common/autotest_common.sh@10 -- # set +x 00:06:44.160 [2024-11-26 17:53:00.832548] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:44.160 [2024-11-26 17:53:00.832665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69967 ] 00:06:44.160 [2024-11-26 17:53:00.981561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.160 [2024-11-26 17:53:01.023625] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:44.160 [2024-11-26 17:53:01.023833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.728 17:53:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.728 17:53:01 -- common/autotest_common.sh@862 -- # return 0 00:06:44.728 17:53:01 -- event/cpu_locks.sh@49 -- # locks_exist 69967 00:06:44.728 17:53:01 -- event/cpu_locks.sh@22 -- # lslocks -p 69967 00:06:44.728 17:53:01 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.297 17:53:02 -- event/cpu_locks.sh@50 -- # killprocess 69967 00:06:45.297 17:53:02 -- common/autotest_common.sh@936 -- # '[' -z 69967 ']' 00:06:45.297 17:53:02 -- common/autotest_common.sh@940 -- # kill -0 69967 00:06:45.297 17:53:02 -- common/autotest_common.sh@941 -- # uname 00:06:45.298 17:53:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:45.298 17:53:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69967 00:06:45.298 killing process with pid 69967 00:06:45.298 17:53:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:45.298 17:53:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:45.298 17:53:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69967' 00:06:45.298 17:53:02 -- common/autotest_common.sh@955 -- # kill 69967 00:06:45.298 17:53:02 -- common/autotest_common.sh@960 -- # wait 69967 00:06:45.866 17:53:02 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69967 00:06:45.866 17:53:02 -- common/autotest_common.sh@650 -- # local es=0 00:06:45.866 17:53:02 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69967 00:06:45.866 17:53:02 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:45.866 17:53:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.866 17:53:02 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:45.866 17:53:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.866 17:53:02 -- common/autotest_common.sh@653 -- # waitforlisten 69967 00:06:45.866 17:53:02 -- common/autotest_common.sh@829 -- # '[' -z 69967 ']' 00:06:45.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
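locks_exist, just traced for pid 69967, reduces to a single pipeline: spdk_tgt flocks one spdk_cpu_lock* file per claimed core (hence the grep pattern in the trace), so listing the pid's POSIX locks is enough to prove the claim is held. The sketch below mirrors the traced commands apart from the parameter name.

    # does the target still hold its per-core CPU lock files?
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    # usage as in the trace: assert the lock is held before killing the target
    locks_exist 69967 && echo 'spdk_cpu_lock held by 69967'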
00:06:45.866 ERROR: process (pid: 69967) is no longer running 00:06:45.866 17:53:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.866 17:53:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.866 17:53:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.866 17:53:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.866 17:53:02 -- common/autotest_common.sh@10 -- # set +x 00:06:45.866 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69967) - No such process 00:06:45.866 17:53:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.867 17:53:02 -- common/autotest_common.sh@862 -- # return 1 00:06:45.867 17:53:02 -- common/autotest_common.sh@653 -- # es=1 00:06:45.867 17:53:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:45.867 17:53:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:45.867 17:53:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:45.867 17:53:02 -- event/cpu_locks.sh@54 -- # no_locks 00:06:45.867 17:53:02 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:45.867 17:53:02 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:45.867 17:53:02 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:45.867 00:06:45.867 real 0m1.775s 00:06:45.867 user 0m1.753s 00:06:45.867 sys 0m0.624s 00:06:45.867 17:53:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.867 17:53:02 -- common/autotest_common.sh@10 -- # set +x 00:06:45.867 ************************************ 00:06:45.867 END TEST default_locks 00:06:45.867 ************************************ 00:06:45.867 17:53:02 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:45.867 17:53:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:45.867 17:53:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.867 17:53:02 -- common/autotest_common.sh@10 -- # set +x 00:06:45.867 ************************************ 00:06:45.867 START TEST default_locks_via_rpc 00:06:45.867 ************************************ 00:06:45.867 17:53:02 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:45.867 17:53:02 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70014 00:06:45.867 17:53:02 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.867 17:53:02 -- event/cpu_locks.sh@63 -- # waitforlisten 70014 00:06:45.867 17:53:02 -- common/autotest_common.sh@829 -- # '[' -z 70014 ']' 00:06:45.867 17:53:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.867 17:53:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.867 17:53:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.867 17:53:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.867 17:53:02 -- common/autotest_common.sh@10 -- # set +x 00:06:45.867 [2024-11-26 17:53:02.691175] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
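The NOT wrapper exercised above inverts an expected failure: waitforlisten must error out against the freshly killed pid, and the test passes only because it does. A reduced sketch of the helper follows; per the trace, the real one also screens signal deaths with (( es > 128 )) and an allowed-status list, both elided here.

    # succeed only when the wrapped command fails
    NOT() {
        local es=0
        "$@" || es=$?
        (( !es == 0 ))   # !es is 0 for any non-zero status, so the test is true only on failure
    }

    # usage mirroring the trace; waitforlisten is the harness helper polled above
    NOT waitforlisten 69967 && echo 'pid 69967 is gone, as expected'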
00:06:45.867 [2024-11-26 17:53:02.691496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70014 ] 00:06:46.126 [2024-11-26 17:53:02.840880] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.126 [2024-11-26 17:53:02.882630] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:46.126 [2024-11-26 17:53:02.882824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.695 17:53:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.695 17:53:03 -- common/autotest_common.sh@862 -- # return 0 00:06:46.695 17:53:03 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:46.695 17:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.695 17:53:03 -- common/autotest_common.sh@10 -- # set +x 00:06:46.695 17:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.695 17:53:03 -- event/cpu_locks.sh@67 -- # no_locks 00:06:46.695 17:53:03 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:46.695 17:53:03 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:46.695 17:53:03 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:46.695 17:53:03 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:46.695 17:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.695 17:53:03 -- common/autotest_common.sh@10 -- # set +x 00:06:46.695 17:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.695 17:53:03 -- event/cpu_locks.sh@71 -- # locks_exist 70014 00:06:46.695 17:53:03 -- event/cpu_locks.sh@22 -- # lslocks -p 70014 00:06:46.695 17:53:03 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.264 17:53:04 -- event/cpu_locks.sh@73 -- # killprocess 70014 00:06:47.264 17:53:04 -- common/autotest_common.sh@936 -- # '[' -z 70014 ']' 00:06:47.264 17:53:04 -- common/autotest_common.sh@940 -- # kill -0 70014 00:06:47.264 17:53:04 -- common/autotest_common.sh@941 -- # uname 00:06:47.264 17:53:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:47.264 17:53:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70014 00:06:47.264 killing process with pid 70014 00:06:47.264 17:53:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:47.264 17:53:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:47.264 17:53:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70014' 00:06:47.264 17:53:04 -- common/autotest_common.sh@955 -- # kill 70014 00:06:47.264 17:53:04 -- common/autotest_common.sh@960 -- # wait 70014 00:06:47.523 ************************************ 00:06:47.523 END TEST default_locks_via_rpc 00:06:47.523 ************************************ 00:06:47.523 00:06:47.524 real 0m1.854s 00:06:47.524 user 0m1.835s 00:06:47.524 sys 0m0.650s 00:06:47.524 17:53:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.524 17:53:04 -- common/autotest_common.sh@10 -- # set +x 00:06:47.782 17:53:04 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:47.782 17:53:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:47.782 17:53:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.782 17:53:04 -- common/autotest_common.sh@10 -- # set +x 00:06:47.782 
************************************ 00:06:47.782 START TEST non_locking_app_on_locked_coremask 00:06:47.782 ************************************ 00:06:47.782 17:53:04 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:47.782 17:53:04 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.782 17:53:04 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70066 00:06:47.782 17:53:04 -- event/cpu_locks.sh@81 -- # waitforlisten 70066 /var/tmp/spdk.sock 00:06:47.782 17:53:04 -- common/autotest_common.sh@829 -- # '[' -z 70066 ']' 00:06:47.782 17:53:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.782 17:53:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.782 17:53:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.782 17:53:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.782 17:53:04 -- common/autotest_common.sh@10 -- # set +x 00:06:47.782 [2024-11-26 17:53:04.621585] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:47.783 [2024-11-26 17:53:04.621837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70066 ] 00:06:48.042 [2024-11-26 17:53:04.773616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.042 [2024-11-26 17:53:04.844213] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:48.042 [2024-11-26 17:53:04.844451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.611 17:53:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.611 17:53:05 -- common/autotest_common.sh@862 -- # return 0 00:06:48.611 17:53:05 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:48.611 17:53:05 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70077 00:06:48.611 17:53:05 -- event/cpu_locks.sh@85 -- # waitforlisten 70077 /var/tmp/spdk2.sock 00:06:48.611 17:53:05 -- common/autotest_common.sh@829 -- # '[' -z 70077 ']' 00:06:48.611 17:53:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.611 17:53:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.611 17:53:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.611 17:53:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.611 17:53:05 -- common/autotest_common.sh@10 -- # set +x 00:06:48.611 [2024-11-26 17:53:05.524613] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:48.611 [2024-11-26 17:53:05.524757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70077 ] 00:06:48.870 [2024-11-26 17:53:05.674134] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
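
The pairing being launched above is the whole point of non_locking_app_on_locked_coremask: both targets request -m 0x1, yet they coexist because the second one opts out of core locking. Outside the harness the scenario is just two launches (binary path and socket names as in the trace; the backgrounding is illustrative):

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$tgt" -m 0x1 &                                                  # claims the core 0 lock file
    "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # never tries to claim it

The second instance logs "CPU core locks deactivated." exactly as traced, so both reactors run on core 0 without a claim conflict.
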
00:06:48.870 [2024-11-26 17:53:05.674230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.870 [2024-11-26 17:53:05.770470] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:48.870 [2024-11-26 17:53:05.770699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.807 17:53:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.807 17:53:06 -- common/autotest_common.sh@862 -- # return 0 00:06:49.807 17:53:06 -- event/cpu_locks.sh@87 -- # locks_exist 70066 00:06:49.807 17:53:06 -- event/cpu_locks.sh@22 -- # lslocks -p 70066 00:06:49.807 17:53:06 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.375 17:53:07 -- event/cpu_locks.sh@89 -- # killprocess 70066 00:06:50.375 17:53:07 -- common/autotest_common.sh@936 -- # '[' -z 70066 ']' 00:06:50.375 17:53:07 -- common/autotest_common.sh@940 -- # kill -0 70066 00:06:50.375 17:53:07 -- common/autotest_common.sh@941 -- # uname 00:06:50.375 17:53:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:50.375 17:53:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70066 00:06:50.634 17:53:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:50.634 killing process with pid 70066 00:06:50.634 17:53:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:50.634 17:53:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70066' 00:06:50.634 17:53:07 -- common/autotest_common.sh@955 -- # kill 70066 00:06:50.634 17:53:07 -- common/autotest_common.sh@960 -- # wait 70066 00:06:51.201 17:53:08 -- event/cpu_locks.sh@90 -- # killprocess 70077 00:06:51.201 17:53:08 -- common/autotest_common.sh@936 -- # '[' -z 70077 ']' 00:06:51.201 17:53:08 -- common/autotest_common.sh@940 -- # kill -0 70077 00:06:51.201 17:53:08 -- common/autotest_common.sh@941 -- # uname 00:06:51.201 17:53:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:51.201 17:53:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70077 00:06:51.201 17:53:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:51.201 killing process with pid 70077 00:06:51.201 17:53:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:51.201 17:53:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70077' 00:06:51.201 17:53:08 -- common/autotest_common.sh@955 -- # kill 70077 00:06:51.201 17:53:08 -- common/autotest_common.sh@960 -- # wait 70077 00:06:51.769 00:06:51.769 real 0m3.965s 00:06:51.769 user 0m4.045s 00:06:51.769 sys 0m1.365s 00:06:51.769 ************************************ 00:06:51.769 END TEST non_locking_app_on_locked_coremask 00:06:51.769 ************************************ 00:06:51.769 17:53:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.769 17:53:08 -- common/autotest_common.sh@10 -- # set +x 00:06:51.769 17:53:08 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:51.769 17:53:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:51.769 17:53:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.769 17:53:08 -- common/autotest_common.sh@10 -- # set +x 00:06:51.769 ************************************ 00:06:51.769 START TEST locking_app_on_unlocked_coremask 00:06:51.769 ************************************ 00:06:51.769 17:53:08 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:51.769 17:53:08 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=70146 00:06:51.769 17:53:08 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:51.769 17:53:08 -- event/cpu_locks.sh@99 -- # waitforlisten 70146 /var/tmp/spdk.sock 00:06:51.769 17:53:08 -- common/autotest_common.sh@829 -- # '[' -z 70146 ']' 00:06:51.769 17:53:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.769 17:53:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.769 17:53:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.769 17:53:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.769 17:53:08 -- common/autotest_common.sh@10 -- # set +x 00:06:51.769 [2024-11-26 17:53:08.642912] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:51.769 [2024-11-26 17:53:08.643061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70146 ] 00:06:52.028 [2024-11-26 17:53:08.792784] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:52.028 [2024-11-26 17:53:08.792864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.028 [2024-11-26 17:53:08.842991] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:52.028 [2024-11-26 17:53:08.843219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.596 17:53:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.596 17:53:09 -- common/autotest_common.sh@862 -- # return 0 00:06:52.596 17:53:09 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:52.596 17:53:09 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70162 00:06:52.596 17:53:09 -- event/cpu_locks.sh@103 -- # waitforlisten 70162 /var/tmp/spdk2.sock 00:06:52.596 17:53:09 -- common/autotest_common.sh@829 -- # '[' -z 70162 ']' 00:06:52.596 17:53:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.596 17:53:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.596 17:53:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.596 17:53:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.596 17:53:09 -- common/autotest_common.sh@10 -- # set +x 00:06:52.855 [2024-11-26 17:53:09.579393] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:52.855 [2024-11-26 17:53:09.579526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70162 ] 00:06:52.855 [2024-11-26 17:53:09.726987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.114 [2024-11-26 17:53:09.827844] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:53.114 [2024-11-26 17:53:09.828039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.682 17:53:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.682 17:53:10 -- common/autotest_common.sh@862 -- # return 0 00:06:53.682 17:53:10 -- event/cpu_locks.sh@105 -- # locks_exist 70162 00:06:53.682 17:53:10 -- event/cpu_locks.sh@22 -- # lslocks -p 70162 00:06:53.682 17:53:10 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:54.618 17:53:11 -- event/cpu_locks.sh@107 -- # killprocess 70146 00:06:54.618 17:53:11 -- common/autotest_common.sh@936 -- # '[' -z 70146 ']' 00:06:54.618 17:53:11 -- common/autotest_common.sh@940 -- # kill -0 70146 00:06:54.618 17:53:11 -- common/autotest_common.sh@941 -- # uname 00:06:54.618 17:53:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:54.618 17:53:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70146 00:06:54.618 17:53:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:54.618 17:53:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:54.618 17:53:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70146' 00:06:54.618 killing process with pid 70146 00:06:54.618 17:53:11 -- common/autotest_common.sh@955 -- # kill 70146 00:06:54.618 17:53:11 -- common/autotest_common.sh@960 -- # wait 70146 00:06:55.556 17:53:12 -- event/cpu_locks.sh@108 -- # killprocess 70162 00:06:55.556 17:53:12 -- common/autotest_common.sh@936 -- # '[' -z 70162 ']' 00:06:55.556 17:53:12 -- common/autotest_common.sh@940 -- # kill -0 70162 00:06:55.556 17:53:12 -- common/autotest_common.sh@941 -- # uname 00:06:55.556 17:53:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:55.556 17:53:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70162 00:06:55.556 17:53:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:55.556 killing process with pid 70162 00:06:55.556 17:53:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:55.556 17:53:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70162' 00:06:55.556 17:53:12 -- common/autotest_common.sh@955 -- # kill 70162 00:06:55.556 17:53:12 -- common/autotest_common.sh@960 -- # wait 70162 00:06:55.816 00:06:55.816 real 0m4.038s 00:06:55.816 user 0m4.337s 00:06:55.816 sys 0m1.249s 00:06:55.816 17:53:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:55.816 17:53:12 -- common/autotest_common.sh@10 -- # set +x 00:06:55.816 ************************************ 00:06:55.816 END TEST locking_app_on_unlocked_coremask 00:06:55.816 ************************************ 00:06:55.816 17:53:12 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:55.816 17:53:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:55.816 17:53:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.816 17:53:12 -- common/autotest_common.sh@10 -- # set +x 
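
Every -m in this suite is a hexadecimal core bitmap: 0x1 selects core 0, 0x7 selects cores 0-2, and 0x1c (binary 11100), used by the overlapped tests further down, selects cores 2-4, so 0x7 and 0x1c collide on exactly one core, core 2. A throwaway expander for such masks, assumed here purely for illustration:

    mask_to_cores() {
        local mask=$(( $1 )) core=0 cores=()
        while (( mask )); do
            if (( mask & 1 )); then cores+=("$core"); fi   # low bit set -> core selected
            mask=$(( mask >> 1 ))
            core=$(( core + 1 ))
        done
        echo "${cores[*]}"
    }
    mask_to_cores 0x7    # -> 0 1 2
    mask_to_cores 0x1c   # -> 2 3 4
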
00:06:55.816 ************************************ 00:06:55.816 START TEST locking_app_on_locked_coremask 00:06:55.816 ************************************ 00:06:55.816 17:53:12 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:55.816 17:53:12 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=70231 00:06:55.816 17:53:12 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:55.816 17:53:12 -- event/cpu_locks.sh@116 -- # waitforlisten 70231 /var/tmp/spdk.sock 00:06:55.816 17:53:12 -- common/autotest_common.sh@829 -- # '[' -z 70231 ']' 00:06:55.816 17:53:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.816 17:53:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.816 17:53:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.816 17:53:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.816 17:53:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.075 [2024-11-26 17:53:12.758172] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:56.075 [2024-11-26 17:53:12.758321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70231 ] 00:06:56.075 [2024-11-26 17:53:12.911294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.075 [2024-11-26 17:53:12.960513] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:56.075 [2024-11-26 17:53:12.960703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.010 17:53:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.010 17:53:13 -- common/autotest_common.sh@862 -- # return 0 00:06:57.010 17:53:13 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=70249 00:06:57.010 17:53:13 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 70249 /var/tmp/spdk2.sock 00:06:57.010 17:53:13 -- common/autotest_common.sh@650 -- # local es=0 00:06:57.010 17:53:13 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:57.010 17:53:13 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70249 /var/tmp/spdk2.sock 00:06:57.010 17:53:13 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:57.010 17:53:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.010 17:53:13 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:57.010 17:53:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.010 17:53:13 -- common/autotest_common.sh@653 -- # waitforlisten 70249 /var/tmp/spdk2.sock 00:06:57.010 17:53:13 -- common/autotest_common.sh@829 -- # '[' -z 70249 ']' 00:06:57.010 17:53:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.010 17:53:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.010 17:53:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
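
The NOT waitforlisten being assembled above is waiting for the failure traced just below: the second target's claim_cpu_cores aborts because pid 70231 already holds the core 0 lock file. The mechanism is an ordinary exclusive file lock, reproducible with util-linux flock(1); the lock-file name follows the spdk_cpu_lock pattern seen later in the trace, and the rest is an illustrative stand-in, not SPDK code:

    lock=/var/tmp/spdk_cpu_lock_000
    flock -n "$lock" sleep 30 &                # stands in for pid 70231 holding core 0
    sleep 0.2
    flock -n "$lock" true || echo 'cannot lock core 0: already claimed'
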
00:06:57.010 17:53:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.010 17:53:13 -- common/autotest_common.sh@10 -- # set +x 00:06:57.010 [2024-11-26 17:53:13.688196] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:57.010 [2024-11-26 17:53:13.688379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70249 ] 00:06:57.010 [2024-11-26 17:53:13.846262] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 70231 has claimed it. 00:06:57.010 [2024-11-26 17:53:13.846341] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:57.579 ERROR: process (pid: 70249) is no longer running 00:06:57.579 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (70249) - No such process 00:06:57.579 17:53:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.579 17:53:14 -- common/autotest_common.sh@862 -- # return 1 00:06:57.579 17:53:14 -- common/autotest_common.sh@653 -- # es=1 00:06:57.579 17:53:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:57.579 17:53:14 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:57.579 17:53:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:57.579 17:53:14 -- event/cpu_locks.sh@122 -- # locks_exist 70231 00:06:57.579 17:53:14 -- event/cpu_locks.sh@22 -- # lslocks -p 70231 00:06:57.579 17:53:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:58.147 17:53:14 -- event/cpu_locks.sh@124 -- # killprocess 70231 00:06:58.147 17:53:14 -- common/autotest_common.sh@936 -- # '[' -z 70231 ']' 00:06:58.147 17:53:14 -- common/autotest_common.sh@940 -- # kill -0 70231 00:06:58.147 17:53:14 -- common/autotest_common.sh@941 -- # uname 00:06:58.147 17:53:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:58.147 17:53:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70231 00:06:58.147 17:53:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:58.147 17:53:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:58.147 killing process with pid 70231 00:06:58.147 17:53:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70231' 00:06:58.147 17:53:14 -- common/autotest_common.sh@955 -- # kill 70231 00:06:58.147 17:53:14 -- common/autotest_common.sh@960 -- # wait 70231 00:06:58.406 00:06:58.406 real 0m2.558s 00:06:58.406 user 0m2.754s 00:06:58.406 sys 0m0.809s 00:06:58.406 ************************************ 00:06:58.406 END TEST locking_app_on_locked_coremask 00:06:58.406 ************************************ 00:06:58.406 17:53:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.406 17:53:15 -- common/autotest_common.sh@10 -- # set +x 00:06:58.406 17:53:15 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:58.406 17:53:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:58.406 17:53:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.406 17:53:15 -- common/autotest_common.sh@10 -- # set +x 00:06:58.406 ************************************ 00:06:58.406 START TEST locking_overlapped_coremask 00:06:58.406 ************************************ 00:06:58.406 17:53:15 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:58.406 17:53:15 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=70303 00:06:58.406 17:53:15 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:58.406 17:53:15 -- event/cpu_locks.sh@133 -- # waitforlisten 70303 /var/tmp/spdk.sock 00:06:58.406 17:53:15 -- common/autotest_common.sh@829 -- # '[' -z 70303 ']' 00:06:58.406 17:53:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.406 17:53:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.406 17:53:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.406 17:53:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.406 17:53:15 -- common/autotest_common.sh@10 -- # set +x 00:06:58.665 [2024-11-26 17:53:15.381117] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:58.665 [2024-11-26 17:53:15.381249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70303 ] 00:06:58.665 [2024-11-26 17:53:15.519853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.665 [2024-11-26 17:53:15.569714] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:58.665 [2024-11-26 17:53:15.570260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.665 [2024-11-26 17:53:15.570359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.665 [2024-11-26 17:53:15.570485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.601 17:53:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.601 17:53:16 -- common/autotest_common.sh@862 -- # return 0 00:06:59.601 17:53:16 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=70321 00:06:59.601 17:53:16 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 70321 /var/tmp/spdk2.sock 00:06:59.601 17:53:16 -- common/autotest_common.sh@650 -- # local es=0 00:06:59.601 17:53:16 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70321 /var/tmp/spdk2.sock 00:06:59.601 17:53:16 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:59.601 17:53:16 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:59.601 17:53:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.601 17:53:16 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:59.601 17:53:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.601 17:53:16 -- common/autotest_common.sh@653 -- # waitforlisten 70321 /var/tmp/spdk2.sock 00:06:59.601 17:53:16 -- common/autotest_common.sh@829 -- # '[' -z 70321 ']' 00:06:59.601 17:53:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.601 17:53:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.601 17:53:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:59.601 17:53:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.601 17:53:16 -- common/autotest_common.sh@10 -- # set +x 00:06:59.601 [2024-11-26 17:53:16.313614] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:59.601 [2024-11-26 17:53:16.313767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70321 ] 00:06:59.601 [2024-11-26 17:53:16.464668] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70303 has claimed it. 00:06:59.601 [2024-11-26 17:53:16.464757] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:00.169 ERROR: process (pid: 70321) is no longer running 00:07:00.169 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (70321) - No such process 00:07:00.169 17:53:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.169 17:53:16 -- common/autotest_common.sh@862 -- # return 1 00:07:00.169 17:53:16 -- common/autotest_common.sh@653 -- # es=1 00:07:00.169 17:53:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:00.169 17:53:16 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:00.169 17:53:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:00.169 17:53:16 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:00.169 17:53:16 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:00.169 17:53:16 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:00.169 17:53:16 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:00.169 17:53:16 -- event/cpu_locks.sh@141 -- # killprocess 70303 00:07:00.169 17:53:16 -- common/autotest_common.sh@936 -- # '[' -z 70303 ']' 00:07:00.169 17:53:16 -- common/autotest_common.sh@940 -- # kill -0 70303 00:07:00.169 17:53:16 -- common/autotest_common.sh@941 -- # uname 00:07:00.169 17:53:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:00.169 17:53:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70303 00:07:00.169 17:53:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:00.169 17:53:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:00.169 17:53:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70303' 00:07:00.169 killing process with pid 70303 00:07:00.169 17:53:16 -- common/autotest_common.sh@955 -- # kill 70303 00:07:00.169 17:53:16 -- common/autotest_common.sh@960 -- # wait 70303 00:07:00.735 00:07:00.735 real 0m2.094s 00:07:00.735 user 0m5.569s 00:07:00.735 sys 0m0.565s 00:07:00.735 17:53:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.735 ************************************ 00:07:00.735 END TEST locking_overlapped_coremask 00:07:00.735 ************************************ 00:07:00.735 17:53:17 -- common/autotest_common.sh@10 -- # set +x 00:07:00.735 17:53:17 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:00.735 17:53:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:00.735 17:53:17 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.735 17:53:17 -- common/autotest_common.sh@10 -- # set +x 00:07:00.735 ************************************ 00:07:00.735 START TEST locking_overlapped_coremask_via_rpc 00:07:00.735 ************************************ 00:07:00.735 17:53:17 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:07:00.735 17:53:17 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=70363 00:07:00.735 17:53:17 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:00.735 17:53:17 -- event/cpu_locks.sh@149 -- # waitforlisten 70363 /var/tmp/spdk.sock 00:07:00.735 17:53:17 -- common/autotest_common.sh@829 -- # '[' -z 70363 ']' 00:07:00.735 17:53:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.735 17:53:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.735 17:53:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.735 17:53:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.735 17:53:17 -- common/autotest_common.sh@10 -- # set +x 00:07:00.736 [2024-11-26 17:53:17.551302] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:00.736 [2024-11-26 17:53:17.551458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70363 ] 00:07:00.993 [2024-11-26 17:53:17.689403] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:00.993 [2024-11-26 17:53:17.689480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.993 [2024-11-26 17:53:17.739529] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:00.993 [2024-11-26 17:53:17.740057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.993 [2024-11-26 17:53:17.739969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.993 [2024-11-26 17:53:17.740183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.561 17:53:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.561 17:53:18 -- common/autotest_common.sh@862 -- # return 0 00:07:01.561 17:53:18 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70381 00:07:01.561 17:53:18 -- event/cpu_locks.sh@153 -- # waitforlisten 70381 /var/tmp/spdk2.sock 00:07:01.561 17:53:18 -- common/autotest_common.sh@829 -- # '[' -z 70381 ']' 00:07:01.561 17:53:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.561 17:53:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.561 17:53:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
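
In this _via_rpc variant both targets boot with --disable-cpumask-locks and locking is only switched on afterwards over the RPC socket, which is where the framework_enable_cpumask_locks traffic below comes from. With SPDK's stock rpc.py client the same two calls would look like this (sockets and masks are the ones in the trace; the exact client invocation is an assumption of this aside):

    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # the 0x7 target claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # the 0x1c target then trips on core 2
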
00:07:01.561 17:53:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.561 17:53:18 -- common/autotest_common.sh@10 -- # set +x 00:07:01.561 17:53:18 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:01.561 [2024-11-26 17:53:18.475317] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:01.561 [2024-11-26 17:53:18.475474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70381 ] 00:07:01.820 [2024-11-26 17:53:18.625004] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:01.820 [2024-11-26 17:53:18.625082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.820 [2024-11-26 17:53:18.721196] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:01.820 [2024-11-26 17:53:18.721638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.820 [2024-11-26 17:53:18.721710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.820 [2024-11-26 17:53:18.722372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:02.388 17:53:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.388 17:53:19 -- common/autotest_common.sh@862 -- # return 0 00:07:02.388 17:53:19 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:02.388 17:53:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.388 17:53:19 -- common/autotest_common.sh@10 -- # set +x 00:07:02.388 17:53:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.388 17:53:19 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:02.388 17:53:19 -- common/autotest_common.sh@650 -- # local es=0 00:07:02.388 17:53:19 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:02.388 17:53:19 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:02.388 17:53:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.388 17:53:19 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:02.388 17:53:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.388 17:53:19 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:02.388 17:53:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.388 17:53:19 -- common/autotest_common.sh@10 -- # set +x 00:07:02.388 [2024-11-26 17:53:19.302678] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70363 has claimed it. 
00:07:02.388 request:
00:07:02.388 {
00:07:02.388 "method": "framework_enable_cpumask_locks",
00:07:02.388 "req_id": 1
00:07:02.388 }
00:07:02.388 Got JSON-RPC error response
00:07:02.388 response:
00:07:02.388 {
00:07:02.388 "code": -32603,
00:07:02.388 "message": "Failed to claim CPU core: 2"
00:07:02.388 }
00:07:02.388 17:53:19 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:07:02.388 17:53:19 -- common/autotest_common.sh@653 -- # es=1
00:07:02.388 17:53:19 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:02.388 17:53:19 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:02.388 17:53:19 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:02.647 17:53:19 -- event/cpu_locks.sh@158 -- # waitforlisten 70363 /var/tmp/spdk.sock
00:07:02.647 17:53:19 -- common/autotest_common.sh@829 -- # '[' -z 70363 ']'
00:07:02.647 17:53:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:02.647 17:53:19 -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:02.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:02.647 17:53:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:02.647 17:53:19 -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:02.647 17:53:19 -- common/autotest_common.sh@10 -- # set +x
00:07:02.647 17:53:19 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:02.647 17:53:19 -- common/autotest_common.sh@862 -- # return 0
00:07:02.647 17:53:19 -- event/cpu_locks.sh@159 -- # waitforlisten 70381 /var/tmp/spdk2.sock
00:07:02.647 17:53:19 -- common/autotest_common.sh@829 -- # '[' -z 70381 ']'
00:07:02.647 17:53:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:02.647 17:53:19 -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:02.647 17:53:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
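
The request/response pair traced above is plain JSON-RPC over a Unix-domain socket, so the -32603 "Failed to claim CPU core: 2" outcome can be provoked without the harness, for instance with a Unix-socket-capable netcat (the request framing and the nc flags here are assumptions; netcat variants differ):

    printf '%s' '{"jsonrpc":"2.0","id":1,"method":"framework_enable_cpumask_locks"}' |
        nc -U -w1 /var/tmp/spdk2.sock
    # expected reply carries "code": -32603 and "message": "Failed to claim CPU core: 2"
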
00:07:02.647 17:53:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.647 17:53:19 -- common/autotest_common.sh@10 -- # set +x 00:07:02.907 17:53:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.907 17:53:19 -- common/autotest_common.sh@862 -- # return 0 00:07:02.907 17:53:19 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:02.907 17:53:19 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:02.907 17:53:19 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:02.907 17:53:19 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:02.907 00:07:02.907 real 0m2.277s 00:07:02.907 user 0m1.011s 00:07:02.907 sys 0m0.203s 00:07:02.907 17:53:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.907 ************************************ 00:07:02.907 END TEST locking_overlapped_coremask_via_rpc 00:07:02.907 ************************************ 00:07:02.907 17:53:19 -- common/autotest_common.sh@10 -- # set +x 00:07:02.907 17:53:19 -- event/cpu_locks.sh@174 -- # cleanup 00:07:02.907 17:53:19 -- event/cpu_locks.sh@15 -- # [[ -z 70363 ]] 00:07:02.907 17:53:19 -- event/cpu_locks.sh@15 -- # killprocess 70363 00:07:02.907 17:53:19 -- common/autotest_common.sh@936 -- # '[' -z 70363 ']' 00:07:02.907 17:53:19 -- common/autotest_common.sh@940 -- # kill -0 70363 00:07:02.907 17:53:19 -- common/autotest_common.sh@941 -- # uname 00:07:02.907 17:53:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:02.907 17:53:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70363 00:07:02.907 17:53:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:02.907 killing process with pid 70363 00:07:02.907 17:53:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:02.907 17:53:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70363' 00:07:02.907 17:53:19 -- common/autotest_common.sh@955 -- # kill 70363 00:07:02.907 17:53:19 -- common/autotest_common.sh@960 -- # wait 70363 00:07:03.475 17:53:20 -- event/cpu_locks.sh@16 -- # [[ -z 70381 ]] 00:07:03.475 17:53:20 -- event/cpu_locks.sh@16 -- # killprocess 70381 00:07:03.475 17:53:20 -- common/autotest_common.sh@936 -- # '[' -z 70381 ']' 00:07:03.475 17:53:20 -- common/autotest_common.sh@940 -- # kill -0 70381 00:07:03.475 17:53:20 -- common/autotest_common.sh@941 -- # uname 00:07:03.475 17:53:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:03.475 17:53:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70381 00:07:03.475 17:53:20 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:07:03.475 17:53:20 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:07:03.475 killing process with pid 70381 00:07:03.475 17:53:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70381' 00:07:03.475 17:53:20 -- common/autotest_common.sh@955 -- # kill 70381 00:07:03.475 17:53:20 -- common/autotest_common.sh@960 -- # wait 70381 00:07:03.734 17:53:20 -- event/cpu_locks.sh@18 -- # rm -f 00:07:03.734 17:53:20 -- event/cpu_locks.sh@1 -- # cleanup 00:07:03.734 17:53:20 -- event/cpu_locks.sh@15 -- # [[ -z 70363 ]] 00:07:03.734 17:53:20 -- event/cpu_locks.sh@15 -- # killprocess 70363 00:07:03.734 17:53:20 -- 
common/autotest_common.sh@936 -- # '[' -z 70363 ']' 00:07:03.734 17:53:20 -- common/autotest_common.sh@940 -- # kill -0 70363 00:07:03.734 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70363) - No such process 00:07:03.734 Process with pid 70363 is not found 00:07:03.734 17:53:20 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70363 is not found' 00:07:03.734 17:53:20 -- event/cpu_locks.sh@16 -- # [[ -z 70381 ]] 00:07:03.734 17:53:20 -- event/cpu_locks.sh@16 -- # killprocess 70381 00:07:03.734 17:53:20 -- common/autotest_common.sh@936 -- # '[' -z 70381 ']' 00:07:03.734 Process with pid 70381 is not found 00:07:03.734 17:53:20 -- common/autotest_common.sh@940 -- # kill -0 70381 00:07:03.734 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70381) - No such process 00:07:03.734 17:53:20 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70381 is not found' 00:07:03.734 17:53:20 -- event/cpu_locks.sh@18 -- # rm -f 00:07:03.734 00:07:03.734 real 0m20.155s 00:07:03.734 user 0m32.514s 00:07:03.734 sys 0m6.612s 00:07:03.993 17:53:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.994 ************************************ 00:07:03.994 END TEST cpu_locks 00:07:03.994 ************************************ 00:07:03.994 17:53:20 -- common/autotest_common.sh@10 -- # set +x 00:07:03.994 00:07:03.994 real 0m47.052s 00:07:03.994 user 1m25.584s 00:07:03.994 sys 0m10.880s 00:07:03.994 17:53:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.994 17:53:20 -- common/autotest_common.sh@10 -- # set +x 00:07:03.994 ************************************ 00:07:03.994 END TEST event 00:07:03.994 ************************************ 00:07:03.994 17:53:20 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:03.994 17:53:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:03.994 17:53:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.994 17:53:20 -- common/autotest_common.sh@10 -- # set +x 00:07:03.994 ************************************ 00:07:03.994 START TEST thread 00:07:03.994 ************************************ 00:07:03.994 17:53:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:03.994 * Looking for test storage... 
00:07:03.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:03.994 17:53:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:03.994 17:53:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:03.994 17:53:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:04.253 17:53:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:04.253 17:53:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:04.253 17:53:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:04.253 17:53:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:04.253 17:53:20 -- scripts/common.sh@335 -- # IFS=.-: 00:07:04.253 17:53:20 -- scripts/common.sh@335 -- # read -ra ver1 00:07:04.253 17:53:20 -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.253 17:53:20 -- scripts/common.sh@336 -- # read -ra ver2 00:07:04.253 17:53:20 -- scripts/common.sh@337 -- # local 'op=<' 00:07:04.253 17:53:20 -- scripts/common.sh@339 -- # ver1_l=2 00:07:04.253 17:53:20 -- scripts/common.sh@340 -- # ver2_l=1 00:07:04.253 17:53:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:04.253 17:53:20 -- scripts/common.sh@343 -- # case "$op" in 00:07:04.253 17:53:20 -- scripts/common.sh@344 -- # : 1 00:07:04.253 17:53:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:04.253 17:53:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:04.253 17:53:20 -- scripts/common.sh@364 -- # decimal 1 00:07:04.253 17:53:20 -- scripts/common.sh@352 -- # local d=1 00:07:04.253 17:53:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.253 17:53:20 -- scripts/common.sh@354 -- # echo 1 00:07:04.253 17:53:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:04.253 17:53:20 -- scripts/common.sh@365 -- # decimal 2 00:07:04.253 17:53:20 -- scripts/common.sh@352 -- # local d=2 00:07:04.253 17:53:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.253 17:53:20 -- scripts/common.sh@354 -- # echo 2 00:07:04.253 17:53:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:04.253 17:53:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:04.253 17:53:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:04.253 17:53:20 -- scripts/common.sh@367 -- # return 0 00:07:04.253 17:53:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.253 17:53:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:04.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.253 --rc genhtml_branch_coverage=1 00:07:04.253 --rc genhtml_function_coverage=1 00:07:04.253 --rc genhtml_legend=1 00:07:04.253 --rc geninfo_all_blocks=1 00:07:04.253 --rc geninfo_unexecuted_blocks=1 00:07:04.253 00:07:04.253 ' 00:07:04.253 17:53:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:04.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.253 --rc genhtml_branch_coverage=1 00:07:04.253 --rc genhtml_function_coverage=1 00:07:04.253 --rc genhtml_legend=1 00:07:04.253 --rc geninfo_all_blocks=1 00:07:04.253 --rc geninfo_unexecuted_blocks=1 00:07:04.253 00:07:04.253 ' 00:07:04.253 17:53:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:04.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.253 --rc genhtml_branch_coverage=1 00:07:04.253 --rc genhtml_function_coverage=1 00:07:04.253 --rc genhtml_legend=1 00:07:04.253 --rc geninfo_all_blocks=1 00:07:04.253 --rc geninfo_unexecuted_blocks=1 00:07:04.253 00:07:04.253 ' 00:07:04.253 17:53:20 
-- common/autotest_common.sh@1704 -- # LCOV='lcov
00:07:04.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.253 --rc genhtml_branch_coverage=1
00:07:04.253 --rc genhtml_function_coverage=1
00:07:04.253 --rc genhtml_legend=1
00:07:04.253 --rc geninfo_all_blocks=1
00:07:04.253 --rc geninfo_unexecuted_blocks=1
00:07:04.253
00:07:04.253 '
00:07:04.253 17:53:20 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:07:04.253 17:53:20 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:07:04.253 17:53:20 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:04.253 17:53:20 -- common/autotest_common.sh@10 -- # set +x
00:07:04.253 ************************************
00:07:04.253 START TEST thread_poller_perf
00:07:04.253 ************************************
00:07:04.253 17:53:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:07:04.253 [2024-11-26 17:53:21.041048] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:07:04.253 [2024-11-26 17:53:21.041196] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70507 ]
00:07:04.513 [2024-11-26 17:53:21.191202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:04.513 [2024-11-26 17:53:21.238228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:04.513 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:07:05.451 [2024-11-26T17:53:22.377Z] ======================================
00:07:05.451 [2024-11-26T17:53:22.377Z] busy:2501224644 (cyc)
00:07:05.451 [2024-11-26T17:53:22.377Z] total_run_count: 371000
00:07:05.451 [2024-11-26T17:53:22.377Z] tsc_hz: 2490000000 (cyc)
00:07:05.451 [2024-11-26T17:53:22.377Z] ======================================
00:07:05.451 [2024-11-26T17:53:22.377Z] poller_cost: 6741 (cyc), 2707 (nsec)
00:07:05.451
00:07:05.451 real 0m1.337s
00:07:05.451 user 0m1.143s
00:07:05.451 sys 0m0.087s
00:07:05.451 17:53:22 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:05.451 17:53:22 -- common/autotest_common.sh@10 -- # set +x
00:07:05.451 ************************************
00:07:05.451 END TEST thread_poller_perf
00:07:05.451 ************************************
00:07:05.711 17:53:22 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:07:05.711 17:53:22 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:07:05.711 17:53:22 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:05.711 17:53:22 -- common/autotest_common.sh@10 -- # set +x
00:07:05.711 ************************************
00:07:05.711 START TEST thread_poller_perf
00:07:05.711 ************************************
00:07:05.711 17:53:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:07:05.711 [2024-11-26 17:53:22.454201] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
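
The first report above is internally consistent: poller_cost is busy cycles divided by total_run_count, converted to wall time via tsc_hz. Redoing the arithmetic in shell:

    echo $(( 2501224644 / 371000 ))              # 6741 cyc per poller invocation
    echo $(( 6741 * 1000000000 / 2490000000 ))   # 2707 nsec at the 2.49 GHz TSC

The 0-microsecond-period run that follows lands at 489 cyc and 196 nsec by the same formula (2495350606 / 5097000 = 489), suggesting the 1 microsecond timer adds roughly 6250 cycles of bookkeeping per call on this box.
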
00:07:05.711 [2024-11-26 17:53:22.454342] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70544 ]
00:07:05.970 [2024-11-26 17:53:22.604165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:05.970 [2024-11-26 17:53:22.650784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:05.970 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:07:06.954 [2024-11-26T17:53:23.880Z] ======================================
00:07:06.954 [2024-11-26T17:53:23.880Z] busy:2495350606 (cyc)
00:07:06.954 [2024-11-26T17:53:23.880Z] total_run_count: 5097000
00:07:06.954 [2024-11-26T17:53:23.880Z] tsc_hz: 2490000000 (cyc)
00:07:06.954 [2024-11-26T17:53:23.880Z] ======================================
00:07:06.954 [2024-11-26T17:53:23.880Z] poller_cost: 489 (cyc), 196 (nsec)
00:07:06.954
00:07:06.954 real 0m1.328s
00:07:06.954 user 0m1.127s
00:07:06.954 sys 0m0.094s
00:07:06.954 17:53:23 -- common/autotest_common.sh@1115 -- # xtrace_disable ************************************
00:07:06.954 END TEST thread_poller_perf 17:53:23 -- common/autotest_common.sh@10 -- # set +x ************************************
00:07:06.954 17:53:23 -- thread/thread.sh@17 -- # [[ y != \y ]]
00:07:06.954
00:07:06.954 real 0m3.018s
00:07:06.954 user 0m2.411s
00:07:06.954 sys 0m0.398s
00:07:06.954 17:53:23 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:06.954 17:53:23 -- common/autotest_common.sh@10 -- # set +x
00:07:06.954 ************************************
00:07:06.954 END TEST thread
00:07:06.954 ************************************
00:07:06.954 17:53:23 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh
00:07:06.954 17:53:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:06.954 17:53:23 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:06.954 17:53:23 -- common/autotest_common.sh@10 -- # set +x
00:07:06.954 ************************************
00:07:06.954 START TEST accel
00:07:06.954 ************************************
00:07:06.954 17:53:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh
00:07:07.214 * Looking for test storage...
00:07:07.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:07.214 17:53:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:07.214 17:53:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:07.214 17:53:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:07.214 17:53:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:07.214 17:53:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:07.214 17:53:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:07.214 17:53:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:07.214 17:53:24 -- scripts/common.sh@335 -- # IFS=.-: 00:07:07.214 17:53:24 -- scripts/common.sh@335 -- # read -ra ver1 00:07:07.214 17:53:24 -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.214 17:53:24 -- scripts/common.sh@336 -- # read -ra ver2 00:07:07.214 17:53:24 -- scripts/common.sh@337 -- # local 'op=<' 00:07:07.214 17:53:24 -- scripts/common.sh@339 -- # ver1_l=2 00:07:07.214 17:53:24 -- scripts/common.sh@340 -- # ver2_l=1 00:07:07.214 17:53:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:07.214 17:53:24 -- scripts/common.sh@343 -- # case "$op" in 00:07:07.214 17:53:24 -- scripts/common.sh@344 -- # : 1 00:07:07.214 17:53:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:07.214 17:53:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.214 17:53:24 -- scripts/common.sh@364 -- # decimal 1 00:07:07.214 17:53:24 -- scripts/common.sh@352 -- # local d=1 00:07:07.214 17:53:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.214 17:53:24 -- scripts/common.sh@354 -- # echo 1 00:07:07.214 17:53:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:07.214 17:53:24 -- scripts/common.sh@365 -- # decimal 2 00:07:07.214 17:53:24 -- scripts/common.sh@352 -- # local d=2 00:07:07.214 17:53:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.214 17:53:24 -- scripts/common.sh@354 -- # echo 2 00:07:07.214 17:53:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:07.214 17:53:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:07.214 17:53:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:07.214 17:53:24 -- scripts/common.sh@367 -- # return 0 00:07:07.214 17:53:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.214 17:53:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:07.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.214 --rc genhtml_branch_coverage=1 00:07:07.214 --rc genhtml_function_coverage=1 00:07:07.214 --rc genhtml_legend=1 00:07:07.214 --rc geninfo_all_blocks=1 00:07:07.214 --rc geninfo_unexecuted_blocks=1 00:07:07.214 00:07:07.214 ' 00:07:07.214 17:53:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:07.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.214 --rc genhtml_branch_coverage=1 00:07:07.214 --rc genhtml_function_coverage=1 00:07:07.214 --rc genhtml_legend=1 00:07:07.214 --rc geninfo_all_blocks=1 00:07:07.214 --rc geninfo_unexecuted_blocks=1 00:07:07.214 00:07:07.214 ' 00:07:07.214 17:53:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:07.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.214 --rc genhtml_branch_coverage=1 00:07:07.214 --rc genhtml_function_coverage=1 00:07:07.214 --rc genhtml_legend=1 00:07:07.214 --rc geninfo_all_blocks=1 00:07:07.214 --rc geninfo_unexecuted_blocks=1 00:07:07.214 00:07:07.214 ' 00:07:07.214 17:53:24 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:07.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.214 --rc genhtml_branch_coverage=1 00:07:07.214 --rc genhtml_function_coverage=1 00:07:07.214 --rc genhtml_legend=1 00:07:07.214 --rc geninfo_all_blocks=1 00:07:07.214 --rc geninfo_unexecuted_blocks=1 00:07:07.214 00:07:07.214 ' 00:07:07.214 17:53:24 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:07:07.214 17:53:24 -- accel/accel.sh@74 -- # get_expected_opcs 00:07:07.214 17:53:24 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:07.214 17:53:24 -- accel/accel.sh@59 -- # spdk_tgt_pid=70626 00:07:07.214 17:53:24 -- accel/accel.sh@60 -- # waitforlisten 70626 00:07:07.214 17:53:24 -- common/autotest_common.sh@829 -- # '[' -z 70626 ']' 00:07:07.214 17:53:24 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:07.214 17:53:24 -- accel/accel.sh@58 -- # build_accel_config 00:07:07.214 17:53:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.214 17:53:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.214 17:53:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.214 17:53:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.214 17:53:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.214 17:53:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.214 17:53:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.214 17:53:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.214 17:53:24 -- common/autotest_common.sh@10 -- # set +x 00:07:07.214 17:53:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.214 17:53:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.214 17:53:24 -- accel/accel.sh@42 -- # jq -r . 00:07:07.474 [2024-11-26 17:53:24.199383] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:07.474 [2024-11-26 17:53:24.199544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70626 ] 00:07:07.474 [2024-11-26 17:53:24.350308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.733 [2024-11-26 17:53:24.400324] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:07.733 [2024-11-26 17:53:24.400550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.301 17:53:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.301 17:53:25 -- common/autotest_common.sh@862 -- # return 0 00:07:08.301 17:53:25 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:08.301 17:53:25 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:08.301 17:53:25 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:07:08.301 17:53:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.301 17:53:25 -- common/autotest_common.sh@10 -- # set +x 00:07:08.301 17:53:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.301 17:53:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # IFS== 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # read -r opc module 00:07:08.301 17:53:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:08.301 17:53:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # IFS== 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # read -r opc module 00:07:08.301 17:53:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:08.301 17:53:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # IFS== 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # read -r opc module 00:07:08.301 17:53:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:08.301 17:53:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # IFS== 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # read -r opc module 00:07:08.301 17:53:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:08.301 17:53:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # IFS== 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # read -r opc module 00:07:08.301 17:53:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:08.301 17:53:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # IFS== 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # read -r opc module 00:07:08.301 17:53:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:08.301 17:53:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # IFS== 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # read -r opc module 00:07:08.301 17:53:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:08.301 17:53:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # IFS== 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # read -r opc module 00:07:08.301 17:53:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:08.301 17:53:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # IFS== 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # read -r opc module 00:07:08.301 17:53:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:08.301 17:53:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # IFS== 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # read -r opc module 00:07:08.301 17:53:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:08.301 17:53:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # IFS== 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # read -r opc module 00:07:08.301 17:53:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:08.301 17:53:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # 
IFS== 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # read -r opc module 00:07:08.301 17:53:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:08.301 17:53:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # IFS== 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # read -r opc module 00:07:08.301 17:53:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:08.301 17:53:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # IFS== 00:07:08.301 17:53:25 -- accel/accel.sh@64 -- # read -r opc module 00:07:08.301 17:53:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:08.301 17:53:25 -- accel/accel.sh@67 -- # killprocess 70626 00:07:08.301 17:53:25 -- common/autotest_common.sh@936 -- # '[' -z 70626 ']' 00:07:08.301 17:53:25 -- common/autotest_common.sh@940 -- # kill -0 70626 00:07:08.301 17:53:25 -- common/autotest_common.sh@941 -- # uname 00:07:08.301 17:53:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:08.301 17:53:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70626 00:07:08.301 17:53:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:08.301 killing process with pid 70626 00:07:08.301 17:53:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:08.301 17:53:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70626' 00:07:08.301 17:53:25 -- common/autotest_common.sh@955 -- # kill 70626 00:07:08.301 17:53:25 -- common/autotest_common.sh@960 -- # wait 70626 00:07:08.869 17:53:25 -- accel/accel.sh@68 -- # trap - ERR 00:07:08.869 17:53:25 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:07:08.869 17:53:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:08.869 17:53:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.869 17:53:25 -- common/autotest_common.sh@10 -- # set +x 00:07:08.869 17:53:25 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:07:08.869 17:53:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:08.869 17:53:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.869 17:53:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.869 17:53:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.869 17:53:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.869 17:53:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.869 17:53:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.869 17:53:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.869 17:53:25 -- accel/accel.sh@42 -- # jq -r . 
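The exp_opcs loop traced earlier in this run builds the script's opcode-to-module map: it pulls accel_get_opc_assignments over the SPDK RPC socket, flattens the JSON with jq into key=value lines, then splits each line on '='. A minimal sketch of that pattern, with the RPC output inlined as a stand-in for the real rpc.py call:

    declare -A expected_opcs
    # Stand-in for: rpc.py -s /var/tmp/spdk.sock accel_get_opc_assignments
    assignments='{"copy": "software", "fill": "software", "crc32c": "software"}'
    exp_opcs=($(jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' <<< "$assignments"))
    for opc_opt in "${exp_opcs[@]}"; do
        IFS== read -r opc module <<< "$opc_opt"   # split "copy=software" at the '='
        expected_opcs["$opc"]=$module             # e.g. expected_opcs[copy]=software
    done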
00:07:08.869 17:53:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.869 17:53:25 -- common/autotest_common.sh@10 -- # set +x 00:07:08.869 17:53:25 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:08.869 17:53:25 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:08.869 17:53:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.869 17:53:25 -- common/autotest_common.sh@10 -- # set +x 00:07:08.869 ************************************ 00:07:08.869 START TEST accel_missing_filename 00:07:08.869 ************************************ 00:07:08.869 17:53:25 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:07:08.869 17:53:25 -- common/autotest_common.sh@650 -- # local es=0 00:07:08.869 17:53:25 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:08.869 17:53:25 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:08.869 17:53:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.869 17:53:25 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:08.869 17:53:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.869 17:53:25 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:07:08.869 17:53:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:08.869 17:53:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.869 17:53:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.869 17:53:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.869 17:53:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.869 17:53:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.869 17:53:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.869 17:53:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.869 17:53:25 -- accel/accel.sh@42 -- # jq -r . 00:07:08.869 [2024-11-26 17:53:25.708852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:08.869 [2024-11-26 17:53:25.708992] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70684 ] 00:07:09.128 [2024-11-26 17:53:25.861573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.128 [2024-11-26 17:53:25.911251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.128 [2024-11-26 17:53:25.958031] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:09.128 [2024-11-26 17:53:26.030699] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:09.386 A filename is required. 
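The es= arithmetic in the next lines comes from the NOT helper in autotest_common.sh, which first confirms the wrapped command is runnable via valid_exec_arg (the type -t trace above) and then inverts its exit status, folding statuses above 128 down before classifying them. A sketch of the assumed shape of both helpers, not the verbatim source:

    valid_exec_arg() {
        local arg=$1
        case "$(type -t "$arg")" in        # file/function/builtin/alias => runnable
            file | function | builtin | alias) ;;
            *) return 1 ;;
        esac
    }

    NOT() {
        local es=0
        valid_exec_arg "$@" || return 1
        "$@" || es=$?
        if ((es > 128)); then              # e.g. the es=234 seen below
            es=$((es & ~128))              # 234 -> 106, matching the trace
            case "$es" in
                11) es=0 ;;                # assumed: a SIGSEGV death fails the test
                *) es=1 ;;                 # anything else counts as a plain error
            esac
        fi
        ((!es == 0))                       # NOT succeeds only if the command failed
    }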
00:07:09.386 17:53:26 -- common/autotest_common.sh@653 -- # es=234 00:07:09.386 17:53:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:09.386 17:53:26 -- common/autotest_common.sh@662 -- # es=106 00:07:09.386 17:53:26 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:09.386 17:53:26 -- common/autotest_common.sh@670 -- # es=1 00:07:09.386 17:53:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:09.386 00:07:09.386 real 0m0.475s 00:07:09.386 user 0m0.264s 00:07:09.386 sys 0m0.145s 00:07:09.386 17:53:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.386 17:53:26 -- common/autotest_common.sh@10 -- # set +x 00:07:09.386 ************************************ 00:07:09.386 END TEST accel_missing_filename 00:07:09.386 ************************************ 00:07:09.386 17:53:26 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:09.386 17:53:26 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:09.386 17:53:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.386 17:53:26 -- common/autotest_common.sh@10 -- # set +x 00:07:09.386 ************************************ 00:07:09.386 START TEST accel_compress_verify 00:07:09.386 ************************************ 00:07:09.386 17:53:26 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:09.386 17:53:26 -- common/autotest_common.sh@650 -- # local es=0 00:07:09.386 17:53:26 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:09.386 17:53:26 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:09.386 17:53:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.386 17:53:26 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:09.386 17:53:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.386 17:53:26 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:09.386 17:53:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:09.386 17:53:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.386 17:53:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.386 17:53:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.386 17:53:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.386 17:53:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.386 17:53:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.386 17:53:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.386 17:53:26 -- accel/accel.sh@42 -- # jq -r . 00:07:09.386 [2024-11-26 17:53:26.238138] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:09.386 [2024-11-26 17:53:26.238276] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70705 ] 00:07:09.644 [2024-11-26 17:53:26.392294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.644 [2024-11-26 17:53:26.455940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.644 [2024-11-26 17:53:26.501592] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:09.902 [2024-11-26 17:53:26.575621] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:09.902 00:07:09.902 Compression does not support the verify option, aborting. 00:07:09.902 17:53:26 -- common/autotest_common.sh@653 -- # es=161 00:07:09.902 17:53:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:09.902 17:53:26 -- common/autotest_common.sh@662 -- # es=33 00:07:09.902 17:53:26 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:09.902 17:53:26 -- common/autotest_common.sh@670 -- # es=1 00:07:09.903 17:53:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:09.903 00:07:09.903 real 0m0.480s 00:07:09.903 user 0m0.278s 00:07:09.903 sys 0m0.140s 00:07:09.903 17:53:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.903 ************************************ 00:07:09.903 END TEST accel_compress_verify 00:07:09.903 17:53:26 -- common/autotest_common.sh@10 -- # set +x 00:07:09.903 ************************************ 00:07:09.903 17:53:26 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:09.903 17:53:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:09.903 17:53:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.903 17:53:26 -- common/autotest_common.sh@10 -- # set +x 00:07:09.903 ************************************ 00:07:09.903 START TEST accel_wrong_workload 00:07:09.903 ************************************ 00:07:09.903 17:53:26 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:07:09.903 17:53:26 -- common/autotest_common.sh@650 -- # local es=0 00:07:09.903 17:53:26 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:09.903 17:53:26 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:09.903 17:53:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.903 17:53:26 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:09.903 17:53:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.903 17:53:26 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:07:09.903 17:53:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:09.903 17:53:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.903 17:53:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.903 17:53:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.903 17:53:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.903 17:53:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.903 17:53:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.903 17:53:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.903 17:53:26 -- accel/accel.sh@42 -- # jq -r . 
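Every accel_perf launch in this log passes -c /dev/fd/62: build_accel_config collects JSON fragments in accel_json_cfg (the [[ 0 -gt 0 ]] checks guard optional hardware modules) and the config reaches the binary through process substitution. A hedged reconstruction of that plumbing; the SPDK_TEST_ACCEL_DSA variable, the dsa_scan_accel_module fragment, and the subsystems wrapper shape are illustrative assumptions:

    build_accel_config() {
        accel_json_cfg=()
        # Each optional module check would append a JSON method object, e.g.:
        [[ ${SPDK_TEST_ACCEL_DSA:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
        local IFS=,   # joins the array elements with commas in the expansion below
        jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
    }
    # accel_perf -c <(build_accel_config) ... : the <() process substitution is
    # what shows up as /dev/fd/62 in the command lines traced above.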
00:07:09.903 Unsupported workload type: foobar 00:07:09.903 [2024-11-26 17:53:26.790393] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:09.903 accel_perf options: 00:07:09.903 [-h help message] 00:07:09.903 [-q queue depth per core] 00:07:09.903 [-C for supported workloads, use this value to configure the io vector size to test (default 1)] 00:07:09.903 [-T number of threads per core] 00:07:09.903 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:09.903 [-t time in seconds] 00:07:09.903 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:09.903 dif_verify, dif_generate, dif_generate_copy] 00:07:09.903 [-M assign module to the operation, not compatible with accel_assign_opc RPC] 00:07:09.903 [-l for compress/decompress workloads, name of uncompressed input file] 00:07:09.903 [-S for crc32c workload, use this seed value (default 0)] 00:07:09.903 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)] 00:07:09.903 [-f for fill workload, use this BYTE value (default 255)] 00:07:09.903 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:09.903 [-y verify result if this switch is on] 00:07:09.903 [-a tasks to allocate per core (default: same value as -q)] 00:07:09.903 Can be used to spread operations across a wider range of memory. 00:07:09.903 17:53:26 -- common/autotest_common.sh@653 -- # es=1 00:07:09.903 17:53:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:09.903 17:53:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:09.903 17:53:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:09.903 00:07:09.903 real 0m0.065s 00:07:09.903 user 0m0.071s 00:07:09.903 sys 0m0.035s 00:07:09.903 17:53:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.903 ************************************ 00:07:09.903 END TEST accel_wrong_workload 00:07:09.903 17:53:26 -- common/autotest_common.sh@10 -- # set +x 00:07:09.903 ************************************ 00:07:10.162 17:53:26 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:10.162 17:53:26 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:10.162 17:53:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.162 17:53:26 -- common/autotest_common.sh@10 -- # set +x 00:07:10.162 ************************************ 00:07:10.162 START TEST accel_negative_buffers 00:07:10.162 ************************************ 00:07:10.162 17:53:26 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:10.162 17:53:26 -- common/autotest_common.sh@650 -- # local es=0 00:07:10.162 17:53:26 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:10.162 17:53:26 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:10.162 17:53:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.162 17:53:26 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:10.162 17:53:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.162 17:53:26 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:07:10.162 17:53:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:10.162 17:53:26 -- accel/accel.sh@12 -- # 
build_accel_config 00:07:10.162 17:53:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.162 17:53:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.162 17:53:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.162 17:53:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.162 17:53:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.162 17:53:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.162 17:53:26 -- accel/accel.sh@42 -- # jq -r . 00:07:10.162 -x option must be non-negative. 00:07:10.162 [2024-11-26 17:53:26.921624] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:10.162 accel_perf options: 00:07:10.162 [-h help message] 00:07:10.162 [-q queue depth per core] 00:07:10.162 [-C for supported workloads, use this value to configure the io vector size to test (default 1)] 00:07:10.162 [-T number of threads per core] 00:07:10.162 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:10.162 [-t time in seconds] 00:07:10.162 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:10.162 dif_verify, dif_generate, dif_generate_copy] 00:07:10.162 [-M assign module to the operation, not compatible with accel_assign_opc RPC] 00:07:10.162 [-l for compress/decompress workloads, name of uncompressed input file] 00:07:10.162 [-S for crc32c workload, use this seed value (default 0)] 00:07:10.162 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)] 00:07:10.162 [-f for fill workload, use this BYTE value (default 255)] 00:07:10.162 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:10.162 [-y verify result if this switch is on] 00:07:10.162 [-a tasks to allocate per core (default: same value as -q)] 00:07:10.162 Can be used to spread operations across a wider range of memory. 
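For contrast with the two rejected invocations, a well-formed run built from the options listed above might look like this (binary path as used elsewhere in this log):

    # 1-second software crc32c run: seed 32, queue depth 32, verify results
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -q 32 -y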
00:07:10.162 17:53:26 -- common/autotest_common.sh@653 -- # es=1 00:07:10.162 17:53:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:10.162 17:53:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:10.162 17:53:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:10.162 00:07:10.162 real 0m0.067s 00:07:10.162 user 0m0.066s 00:07:10.162 sys 0m0.043s 00:07:10.162 17:53:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.162 17:53:26 -- common/autotest_common.sh@10 -- # set +x 00:07:10.162 ************************************ 00:07:10.162 END TEST accel_negative_buffers 00:07:10.162 ************************************ 00:07:10.163 17:53:26 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:10.163 17:53:26 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:10.163 17:53:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.163 17:53:26 -- common/autotest_common.sh@10 -- # set +x 00:07:10.163 ************************************ 00:07:10.163 START TEST accel_crc32c 00:07:10.163 ************************************ 00:07:10.163 17:53:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:10.163 17:53:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.163 17:53:27 -- accel/accel.sh@17 -- # local accel_module 00:07:10.163 17:53:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:10.163 17:53:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:10.163 17:53:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.163 17:53:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.163 17:53:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.163 17:53:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.163 17:53:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.163 17:53:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.163 17:53:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.163 17:53:27 -- accel/accel.sh@42 -- # jq -r . 00:07:10.163 [2024-11-26 17:53:27.062537] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:10.163 [2024-11-26 17:53:27.062901] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70772 ] 00:07:10.421 [2024-11-26 17:53:27.214291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.421 [2024-11-26 17:53:27.263462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.796 17:53:28 -- accel/accel.sh@18 -- # out=' 00:07:11.796 SPDK Configuration: 00:07:11.796 Core mask: 0x1 00:07:11.796 00:07:11.796 Accel Perf Configuration: 00:07:11.796 Workload Type: crc32c 00:07:11.796 CRC-32C seed: 32 00:07:11.796 Transfer size: 4096 bytes 00:07:11.796 Vector count 1 00:07:11.796 Module: software 00:07:11.796 Queue depth: 32 00:07:11.796 Allocate depth: 32 00:07:11.796 # threads/core: 1 00:07:11.796 Run time: 1 seconds 00:07:11.796 Verify: Yes 00:07:11.796 00:07:11.796 Running for 1 seconds... 
00:07:11.796 00:07:11.796 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:11.796 ------------------------------------------------------------------------------------ 00:07:11.796 0,0 505824/s 1975 MiB/s 0 0 00:07:11.796 ==================================================================================== 00:07:11.796 Total 505824/s 1975 MiB/s 0 0' 00:07:11.796 17:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:11.796 17:53:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:11.796 17:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:11.796 17:53:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:11.796 17:53:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.796 17:53:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.796 17:53:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.796 17:53:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.796 17:53:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.796 17:53:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.796 17:53:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.796 17:53:28 -- accel/accel.sh@42 -- # jq -r . 00:07:11.796 [2024-11-26 17:53:28.527882] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:11.796 [2024-11-26 17:53:28.528247] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70793 ] 00:07:11.796 [2024-11-26 17:53:28.678118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.055 [2024-11-26 17:53:28.726015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.055 17:53:28 -- accel/accel.sh@21 -- # val= 00:07:12.055 17:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.055 17:53:28 -- accel/accel.sh@21 -- # val= 00:07:12.055 17:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.055 17:53:28 -- accel/accel.sh@21 -- # val=0x1 00:07:12.055 17:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.055 17:53:28 -- accel/accel.sh@21 -- # val= 00:07:12.055 17:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.055 17:53:28 -- accel/accel.sh@21 -- # val= 00:07:12.055 17:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.055 17:53:28 -- accel/accel.sh@21 -- # val=crc32c 00:07:12.055 17:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.055 17:53:28 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.055 17:53:28 -- accel/accel.sh@21 -- # val=32 00:07:12.055 17:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.055 17:53:28 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.055 17:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.055 17:53:28 -- accel/accel.sh@21 -- # val= 00:07:12.055 17:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.055 17:53:28 -- accel/accel.sh@21 -- # val=software 00:07:12.055 17:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.055 17:53:28 -- accel/accel.sh@23 -- # accel_module=software 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.055 17:53:28 -- accel/accel.sh@21 -- # val=32 00:07:12.055 17:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.055 17:53:28 -- accel/accel.sh@21 -- # val=32 00:07:12.055 17:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.055 17:53:28 -- accel/accel.sh@21 -- # val=1 00:07:12.055 17:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.055 17:53:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:12.055 17:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.055 17:53:28 -- accel/accel.sh@21 -- # val=Yes 00:07:12.055 17:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.055 17:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.055 17:53:28 -- accel/accel.sh@21 -- # val= 00:07:12.056 17:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.056 17:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.056 17:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.056 17:53:28 -- accel/accel.sh@21 -- # val= 00:07:12.056 17:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.056 17:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.056 17:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:13.434 17:53:29 -- accel/accel.sh@21 -- # val= 00:07:13.434 17:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.434 17:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:13.434 17:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:13.434 17:53:29 -- accel/accel.sh@21 -- # val= 00:07:13.434 17:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.434 17:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:13.434 17:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:13.434 17:53:29 -- accel/accel.sh@21 -- # val= 00:07:13.434 17:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.434 17:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:13.434 17:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:13.434 17:53:29 -- accel/accel.sh@21 -- # val= 00:07:13.434 17:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.434 17:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:13.434 17:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:13.434 17:53:29 -- accel/accel.sh@21 -- # val= 00:07:13.434 17:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.434 17:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:13.434 17:53:29 -- 
accel/accel.sh@20 -- # read -r var val 00:07:13.434 17:53:29 -- accel/accel.sh@21 -- # val= 00:07:13.434 17:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.434 17:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:13.434 17:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:13.434 17:53:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:13.434 17:53:29 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:13.434 ************************************ 00:07:13.434 END TEST accel_crc32c 00:07:13.434 ************************************ 00:07:13.434 17:53:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.434 00:07:13.434 real 0m2.923s 00:07:13.434 user 0m2.428s 00:07:13.434 sys 0m0.295s 00:07:13.434 17:53:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.434 17:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.434 17:53:29 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:13.434 17:53:29 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:13.434 17:53:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.434 17:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.434 ************************************ 00:07:13.434 START TEST accel_crc32c_C2 00:07:13.434 ************************************ 00:07:13.434 17:53:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:13.434 17:53:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.434 17:53:30 -- accel/accel.sh@17 -- # local accel_module 00:07:13.434 17:53:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:13.434 17:53:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:13.434 17:53:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.434 17:53:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.434 17:53:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.434 17:53:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.434 17:53:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.434 17:53:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.434 17:53:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.434 17:53:30 -- accel/accel.sh@42 -- # jq -r . 00:07:13.434 [2024-11-26 17:53:30.050828] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:13.434 [2024-11-26 17:53:30.050955] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70828 ] 00:07:13.434 [2024-11-26 17:53:30.199391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.434 [2024-11-26 17:53:30.247195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.812 17:53:31 -- accel/accel.sh@18 -- # out=' 00:07:14.812 SPDK Configuration: 00:07:14.812 Core mask: 0x1 00:07:14.812 00:07:14.812 Accel Perf Configuration: 00:07:14.812 Workload Type: crc32c 00:07:14.812 CRC-32C seed: 0 00:07:14.812 Transfer size: 4096 bytes 00:07:14.812 Vector count 2 00:07:14.812 Module: software 00:07:14.812 Queue depth: 32 00:07:14.812 Allocate depth: 32 00:07:14.812 # threads/core: 1 00:07:14.812 Run time: 1 seconds 00:07:14.812 Verify: Yes 00:07:14.812 00:07:14.812 Running for 1 seconds... 
00:07:14.812 00:07:14.812 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:14.812 ------------------------------------------------------------------------------------ 00:07:14.812 0,0 406080/s 3172 MiB/s 0 0 00:07:14.812 ==================================================================================== 00:07:14.812 Total 406080/s 1586 MiB/s 0 0' 00:07:14.812 17:53:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:14.812 17:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:14.812 17:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:14.812 17:53:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:14.812 17:53:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.812 17:53:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.812 17:53:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.812 17:53:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.812 17:53:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.812 17:53:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.812 17:53:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.812 17:53:31 -- accel/accel.sh@42 -- # jq -r . 00:07:14.812 [2024-11-26 17:53:31.507330] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:14.812 [2024-11-26 17:53:31.507741] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70849 ] 00:07:14.812 [2024-11-26 17:53:31.659273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.812 [2024-11-26 17:53:31.708804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.071 17:53:31 -- accel/accel.sh@21 -- # val= 00:07:15.071 17:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.071 17:53:31 -- accel/accel.sh@21 -- # val= 00:07:15.071 17:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.071 17:53:31 -- accel/accel.sh@21 -- # val=0x1 00:07:15.071 17:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.071 17:53:31 -- accel/accel.sh@21 -- # val= 00:07:15.071 17:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.071 17:53:31 -- accel/accel.sh@21 -- # val= 00:07:15.071 17:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.071 17:53:31 -- accel/accel.sh@21 -- # val=crc32c 00:07:15.071 17:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.071 17:53:31 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.071 17:53:31 -- accel/accel.sh@21 -- # val=0 00:07:15.071 17:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.071 17:53:31 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.071 17:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.071 17:53:31 -- accel/accel.sh@21 -- # val= 00:07:15.071 17:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.071 17:53:31 -- accel/accel.sh@21 -- # val=software 00:07:15.071 17:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.071 17:53:31 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.071 17:53:31 -- accel/accel.sh@21 -- # val=32 00:07:15.071 17:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.071 17:53:31 -- accel/accel.sh@21 -- # val=32 00:07:15.071 17:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.071 17:53:31 -- accel/accel.sh@21 -- # val=1 00:07:15.071 17:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.071 17:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.072 17:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.072 17:53:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.072 17:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.072 17:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.072 17:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.072 17:53:31 -- accel/accel.sh@21 -- # val=Yes 00:07:15.072 17:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.072 17:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.072 17:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.072 17:53:31 -- accel/accel.sh@21 -- # val= 00:07:15.072 17:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.072 17:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.072 17:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.072 17:53:31 -- accel/accel.sh@21 -- # val= 00:07:15.072 17:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.072 17:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.072 17:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:16.009 17:53:32 -- accel/accel.sh@21 -- # val= 00:07:16.009 17:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.009 17:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:16.009 17:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:16.009 17:53:32 -- accel/accel.sh@21 -- # val= 00:07:16.009 17:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.009 17:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:16.009 17:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:16.009 17:53:32 -- accel/accel.sh@21 -- # val= 00:07:16.009 17:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.009 17:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:16.009 17:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:16.009 17:53:32 -- accel/accel.sh@21 -- # val= 00:07:16.009 17:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.009 17:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:16.009 17:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:16.009 17:53:32 -- accel/accel.sh@21 -- # val= 00:07:16.009 17:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.009 17:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:16.009 17:53:32 -- 
accel/accel.sh@20 -- # read -r var val 00:07:16.009 17:53:32 -- accel/accel.sh@21 -- # val= 00:07:16.009 17:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.009 17:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:16.009 17:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:16.009 17:53:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.009 17:53:32 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:16.009 17:53:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.009 00:07:16.009 real 0m2.923s 00:07:16.009 user 0m2.428s 00:07:16.009 sys 0m0.299s 00:07:16.009 17:53:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.009 17:53:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.009 ************************************ 00:07:16.009 END TEST accel_crc32c_C2 00:07:16.009 ************************************ 00:07:16.268 17:53:32 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:16.268 17:53:32 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:16.268 17:53:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.268 17:53:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.268 ************************************ 00:07:16.268 START TEST accel_copy 00:07:16.268 ************************************ 00:07:16.268 17:53:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:07:16.268 17:53:32 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.268 17:53:32 -- accel/accel.sh@17 -- # local accel_module 00:07:16.268 17:53:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:16.268 17:53:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:16.268 17:53:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.268 17:53:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.268 17:53:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.268 17:53:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.268 17:53:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.268 17:53:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.268 17:53:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.268 17:53:33 -- accel/accel.sh@42 -- # jq -r . 00:07:16.268 [2024-11-26 17:53:33.043603] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:16.268 [2024-11-26 17:53:33.044362] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70879 ] 00:07:16.542 [2024-11-26 17:53:33.211258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.542 [2024-11-26 17:53:33.259771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.920 17:53:34 -- accel/accel.sh@18 -- # out=' 00:07:17.920 SPDK Configuration: 00:07:17.920 Core mask: 0x1 00:07:17.920 00:07:17.920 Accel Perf Configuration: 00:07:17.920 Workload Type: copy 00:07:17.920 Transfer size: 4096 bytes 00:07:17.920 Vector count 1 00:07:17.920 Module: software 00:07:17.920 Queue depth: 32 00:07:17.920 Allocate depth: 32 00:07:17.920 # threads/core: 1 00:07:17.920 Run time: 1 seconds 00:07:17.920 Verify: Yes 00:07:17.920 00:07:17.920 Running for 1 seconds... 
00:07:17.920 00:07:17.920 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:17.920 ------------------------------------------------------------------------------------ 00:07:17.920 0,0 340384/s 1329 MiB/s 0 0 00:07:17.920 ==================================================================================== 00:07:17.920 Total 340384/s 1329 MiB/s 0 0' 00:07:17.920 17:53:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.920 17:53:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:17.920 17:53:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.920 17:53:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.920 17:53:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.920 17:53:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.920 17:53:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.920 17:53:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.920 17:53:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.920 17:53:34 -- accel/accel.sh@42 -- # jq -r . 00:07:17.920 [2024-11-26 17:53:34.512836] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:17.920 [2024-11-26 17:53:34.513218] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70899 ] 00:07:17.920 [2024-11-26 17:53:34.659957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.920 [2024-11-26 17:53:34.707768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.920 17:53:34 -- accel/accel.sh@21 -- # val= 00:07:17.920 17:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.920 17:53:34 -- accel/accel.sh@21 -- # val= 00:07:17.920 17:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.920 17:53:34 -- accel/accel.sh@21 -- # val=0x1 00:07:17.920 17:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.920 17:53:34 -- accel/accel.sh@21 -- # val= 00:07:17.920 17:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.920 17:53:34 -- accel/accel.sh@21 -- # val= 00:07:17.920 17:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.920 17:53:34 -- accel/accel.sh@21 -- # val=copy 00:07:17.920 17:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.920 17:53:34 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.920 17:53:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.920 17:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.920 17:53:34 -- 
accel/accel.sh@21 -- # val= 00:07:17.920 17:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.920 17:53:34 -- accel/accel.sh@21 -- # val=software 00:07:17.920 17:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.920 17:53:34 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.920 17:53:34 -- accel/accel.sh@21 -- # val=32 00:07:17.920 17:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.920 17:53:34 -- accel/accel.sh@21 -- # val=32 00:07:17.920 17:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.920 17:53:34 -- accel/accel.sh@21 -- # val=1 00:07:17.920 17:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.920 17:53:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:17.920 17:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.920 17:53:34 -- accel/accel.sh@21 -- # val=Yes 00:07:17.920 17:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.920 17:53:34 -- accel/accel.sh@21 -- # val= 00:07:17.920 17:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.920 17:53:34 -- accel/accel.sh@21 -- # val= 00:07:17.920 17:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.920 17:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:19.300 17:53:36 -- accel/accel.sh@21 -- # val= 00:07:19.300 17:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.300 17:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:19.300 17:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:19.300 17:53:36 -- accel/accel.sh@21 -- # val= 00:07:19.300 17:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.300 17:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:19.300 17:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:19.300 17:53:36 -- accel/accel.sh@21 -- # val= 00:07:19.300 17:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.300 17:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:19.300 17:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:19.300 17:53:36 -- accel/accel.sh@21 -- # val= 00:07:19.300 17:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.300 17:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:19.300 17:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:19.300 17:53:36 -- accel/accel.sh@21 -- # val= 00:07:19.300 17:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.300 17:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:19.300 17:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:19.300 17:53:36 -- accel/accel.sh@21 -- # val= 00:07:19.300 17:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.300 17:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:19.300 17:53:36 -- 
accel/accel.sh@20 -- # read -r var val 00:07:19.300 17:53:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.300 17:53:36 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:19.300 17:53:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.300 00:07:19.300 real 0m3.024s 00:07:19.300 user 0m2.524s 00:07:19.300 sys 0m0.297s 00:07:19.300 17:53:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.300 17:53:36 -- common/autotest_common.sh@10 -- # set +x 00:07:19.300 ************************************ 00:07:19.300 END TEST accel_copy 00:07:19.300 ************************************ 00:07:19.300 17:53:36 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:19.300 17:53:36 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:19.300 17:53:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.300 17:53:36 -- common/autotest_common.sh@10 -- # set +x 00:07:19.300 ************************************ 00:07:19.300 START TEST accel_fill 00:07:19.300 ************************************ 00:07:19.300 17:53:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:19.300 17:53:36 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.300 17:53:36 -- accel/accel.sh@17 -- # local accel_module 00:07:19.300 17:53:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:19.300 17:53:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.300 17:53:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:19.300 17:53:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.300 17:53:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.300 17:53:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.300 17:53:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.300 17:53:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.300 17:53:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.300 17:53:36 -- accel/accel.sh@42 -- # jq -r . 00:07:19.300 [2024-11-26 17:53:36.126783] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:19.300 [2024-11-26 17:53:36.127151] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70935 ] 00:07:19.561 [2024-11-26 17:53:36.279168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.561 [2024-11-26 17:53:36.358133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.940 17:53:37 -- accel/accel.sh@18 -- # out=' 00:07:20.940 SPDK Configuration: 00:07:20.940 Core mask: 0x1 00:07:20.940 00:07:20.940 Accel Perf Configuration: 00:07:20.940 Workload Type: fill 00:07:20.940 Fill pattern: 0x80 00:07:20.940 Transfer size: 4096 bytes 00:07:20.940 Vector count 1 00:07:20.940 Module: software 00:07:20.940 Queue depth: 64 00:07:20.940 Allocate depth: 64 00:07:20.940 # threads/core: 1 00:07:20.940 Run time: 1 seconds 00:07:20.940 Verify: Yes 00:07:20.940 00:07:20.940 Running for 1 seconds... 
00:07:20.940 00:07:20.940 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.940 ------------------------------------------------------------------------------------ 00:07:20.940 0,0 557312/s 2177 MiB/s 0 0 00:07:20.940 ==================================================================================== 00:07:20.940 Total 557312/s 2177 MiB/s 0 0' 00:07:20.940 17:53:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:20.940 17:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:20.940 17:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:20.940 17:53:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:20.940 17:53:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.940 17:53:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.940 17:53:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.940 17:53:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.940 17:53:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.940 17:53:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.940 17:53:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.940 17:53:37 -- accel/accel.sh@42 -- # jq -r . 00:07:20.940 [2024-11-26 17:53:37.744175] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:20.940 [2024-11-26 17:53:37.744685] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70961 ] 00:07:21.200 [2024-11-26 17:53:37.911011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.200 [2024-11-26 17:53:37.979781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.200 17:53:38 -- accel/accel.sh@21 -- # val= 00:07:21.200 17:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.200 17:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:21.200 17:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:21.200 17:53:38 -- accel/accel.sh@21 -- # val= 00:07:21.200 17:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:21.201 17:53:38 -- accel/accel.sh@21 -- # val=0x1 00:07:21.201 17:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:21.201 17:53:38 -- accel/accel.sh@21 -- # val= 00:07:21.201 17:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:21.201 17:53:38 -- accel/accel.sh@21 -- # val= 00:07:21.201 17:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:21.201 17:53:38 -- accel/accel.sh@21 -- # val=fill 00:07:21.201 17:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.201 17:53:38 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:21.201 17:53:38 -- accel/accel.sh@21 -- # val=0x80 00:07:21.201 17:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # read -r var val 
00:07:21.201 17:53:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:21.201 17:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:21.201 17:53:38 -- accel/accel.sh@21 -- # val= 00:07:21.201 17:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:21.201 17:53:38 -- accel/accel.sh@21 -- # val=software 00:07:21.201 17:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.201 17:53:38 -- accel/accel.sh@23 -- # accel_module=software 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:21.201 17:53:38 -- accel/accel.sh@21 -- # val=64 00:07:21.201 17:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:21.201 17:53:38 -- accel/accel.sh@21 -- # val=64 00:07:21.201 17:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:21.201 17:53:38 -- accel/accel.sh@21 -- # val=1 00:07:21.201 17:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:21.201 17:53:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:21.201 17:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:21.201 17:53:38 -- accel/accel.sh@21 -- # val=Yes 00:07:21.201 17:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:21.201 17:53:38 -- accel/accel.sh@21 -- # val= 00:07:21.201 17:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:21.201 17:53:38 -- accel/accel.sh@21 -- # val= 00:07:21.201 17:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:21.201 17:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:22.578 17:53:39 -- accel/accel.sh@21 -- # val= 00:07:22.578 17:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.578 17:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:22.578 17:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:22.578 17:53:39 -- accel/accel.sh@21 -- # val= 00:07:22.578 17:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.578 17:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:22.578 17:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:22.578 17:53:39 -- accel/accel.sh@21 -- # val= 00:07:22.578 17:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.578 17:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:22.578 17:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:22.578 17:53:39 -- accel/accel.sh@21 -- # val= 00:07:22.578 17:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.578 17:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:22.578 17:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:22.578 17:53:39 -- accel/accel.sh@21 -- # val= 00:07:22.578 17:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.578 17:53:39 -- accel/accel.sh@20 -- # IFS=: 
00:07:22.578 17:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:22.578 17:53:39 -- accel/accel.sh@21 -- # val= 00:07:22.578 17:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.578 17:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:22.578 17:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:22.578 17:53:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:22.578 17:53:39 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:22.578 ************************************ 00:07:22.578 END TEST accel_fill 00:07:22.578 ************************************ 00:07:22.578 17:53:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.578 00:07:22.578 real 0m3.150s 00:07:22.578 user 0m2.541s 00:07:22.578 sys 0m0.405s 00:07:22.578 17:53:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.578 17:53:39 -- common/autotest_common.sh@10 -- # set +x 00:07:22.578 17:53:39 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:22.578 17:53:39 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:22.578 17:53:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.578 17:53:39 -- common/autotest_common.sh@10 -- # set +x 00:07:22.578 ************************************ 00:07:22.578 START TEST accel_copy_crc32c 00:07:22.578 ************************************ 00:07:22.578 17:53:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:07:22.578 17:53:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.578 17:53:39 -- accel/accel.sh@17 -- # local accel_module 00:07:22.578 17:53:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:22.578 17:53:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.578 17:53:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:22.578 17:53:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.578 17:53:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.578 17:53:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.578 17:53:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.578 17:53:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.578 17:53:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.578 17:53:39 -- accel/accel.sh@42 -- # jq -r . 00:07:22.578 [2024-11-26 17:53:39.334272] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:22.578 [2024-11-26 17:53:39.334440] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70994 ] 00:07:22.578 [2024-11-26 17:53:39.486293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.837 [2024-11-26 17:53:39.534387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.214 17:53:40 -- accel/accel.sh@18 -- # out=' 00:07:24.214 SPDK Configuration: 00:07:24.214 Core mask: 0x1 00:07:24.214 00:07:24.214 Accel Perf Configuration: 00:07:24.214 Workload Type: copy_crc32c 00:07:24.214 CRC-32C seed: 0 00:07:24.214 Vector size: 4096 bytes 00:07:24.214 Transfer size: 4096 bytes 00:07:24.214 Vector count 1 00:07:24.214 Module: software 00:07:24.214 Queue depth: 32 00:07:24.214 Allocate depth: 32 00:07:24.214 # threads/core: 1 00:07:24.214 Run time: 1 seconds 00:07:24.214 Verify: Yes 00:07:24.214 00:07:24.214 Running for 1 seconds... 
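accel_fill is now done (real 0m3.150s above). The workload writes one byte pattern across each 4096-byte buffer; the command line's -f 128 surfaces in the printed configuration as "Fill pattern: 0x80" (128 decimal), with -q 64 and -a 64 as the queue and allocate depths. A hedged sketch of what the software path does, plus the arithmetic behind the table's bandwidth column:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define XFER_SIZE 4096   /* "Transfer size: 4096 bytes" */
    #define FILL_BYTE 0x80   /* -f 128 -> "Fill pattern: 0x80" */

    int main(void)
    {
        static uint8_t dst[XFER_SIZE];

        /* One fill op: write the pattern, then verify ("Verify: Yes"). */
        memset(dst, FILL_BYTE, sizeof(dst));
        for (size_t i = 0; i < sizeof(dst); i++)
            if (dst[i] != FILL_BYTE)
                return 1;

        /* The table's bandwidth column: 557312 transfers/s * 4096 B = 2177 MiB/s. */
        printf("fill ok, %d MiB/s\n", (int)(557312.0 * XFER_SIZE / (1024 * 1024)));
        return 0;
    }

The 2177 MiB/s figure from the earlier table is exact here: 557312 / 256 = 2177.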
00:07:24.214 00:07:24.214 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:24.214 ------------------------------------------------------------------------------------ 00:07:24.214 0,0 255776/s 999 MiB/s 0 0 00:07:24.214 ==================================================================================== 00:07:24.214 Total 255776/s 999 MiB/s 0 0' 00:07:24.215 17:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:24.215 17:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:24.215 17:53:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:24.215 17:53:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:24.215 17:53:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.215 17:53:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.215 17:53:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.215 17:53:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.215 17:53:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.215 17:53:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.215 17:53:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.215 17:53:40 -- accel/accel.sh@42 -- # jq -r . 00:07:24.215 [2024-11-26 17:53:40.794897] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:24.215 [2024-11-26 17:53:40.795043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71020 ] 00:07:24.215 [2024-11-26 17:53:40.947183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.215 [2024-11-26 17:53:40.995573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.215 17:53:41 -- accel/accel.sh@21 -- # val= 00:07:24.215 17:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.215 17:53:41 -- accel/accel.sh@21 -- # val= 00:07:24.215 17:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.215 17:53:41 -- accel/accel.sh@21 -- # val=0x1 00:07:24.215 17:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.215 17:53:41 -- accel/accel.sh@21 -- # val= 00:07:24.215 17:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.215 17:53:41 -- accel/accel.sh@21 -- # val= 00:07:24.215 17:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.215 17:53:41 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:24.215 17:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.215 17:53:41 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.215 17:53:41 -- accel/accel.sh@21 -- # val=0 00:07:24.215 17:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.215 
17:53:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:24.215 17:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.215 17:53:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:24.215 17:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.215 17:53:41 -- accel/accel.sh@21 -- # val= 00:07:24.215 17:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.215 17:53:41 -- accel/accel.sh@21 -- # val=software 00:07:24.215 17:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.215 17:53:41 -- accel/accel.sh@23 -- # accel_module=software 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.215 17:53:41 -- accel/accel.sh@21 -- # val=32 00:07:24.215 17:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.215 17:53:41 -- accel/accel.sh@21 -- # val=32 00:07:24.215 17:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.215 17:53:41 -- accel/accel.sh@21 -- # val=1 00:07:24.215 17:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.215 17:53:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:24.215 17:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.215 17:53:41 -- accel/accel.sh@21 -- # val=Yes 00:07:24.215 17:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.215 17:53:41 -- accel/accel.sh@21 -- # val= 00:07:24.215 17:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.215 17:53:41 -- accel/accel.sh@21 -- # val= 00:07:24.215 17:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.215 17:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:25.593 17:53:42 -- accel/accel.sh@21 -- # val= 00:07:25.593 17:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.593 17:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:25.593 17:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:25.593 17:53:42 -- accel/accel.sh@21 -- # val= 00:07:25.593 17:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.593 17:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:25.593 17:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:25.593 17:53:42 -- accel/accel.sh@21 -- # val= 00:07:25.593 17:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.593 17:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:25.593 17:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:25.593 17:53:42 -- accel/accel.sh@21 -- # val= 00:07:25.593 17:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.593 17:53:42 -- accel/accel.sh@20 -- # IFS=: 
00:07:25.593 17:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:25.593 17:53:42 -- accel/accel.sh@21 -- # val= 00:07:25.593 17:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.593 17:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:25.593 17:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:25.593 17:53:42 -- accel/accel.sh@21 -- # val= 00:07:25.593 17:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.593 17:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:25.593 17:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:25.593 ************************************ 00:07:25.593 END TEST accel_copy_crc32c 00:07:25.593 ************************************ 00:07:25.593 17:53:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:25.593 17:53:42 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:25.593 17:53:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.593 00:07:25.593 real 0m2.923s 00:07:25.593 user 0m2.430s 00:07:25.593 sys 0m0.297s 00:07:25.593 17:53:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.593 17:53:42 -- common/autotest_common.sh@10 -- # set +x 00:07:25.593 17:53:42 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:25.593 17:53:42 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:25.593 17:53:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.593 17:53:42 -- common/autotest_common.sh@10 -- # set +x 00:07:25.593 ************************************ 00:07:25.593 START TEST accel_copy_crc32c_C2 00:07:25.593 ************************************ 00:07:25.593 17:53:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:25.593 17:53:42 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.593 17:53:42 -- accel/accel.sh@17 -- # local accel_module 00:07:25.593 17:53:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:25.593 17:53:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:25.593 17:53:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.593 17:53:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.593 17:53:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.593 17:53:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.593 17:53:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.593 17:53:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.593 17:53:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.593 17:53:42 -- accel/accel.sh@42 -- # jq -r . 00:07:25.593 [2024-11-26 17:53:42.328518] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
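accel_copy_crc32c (real 0m2.923s above) copies each 4096-byte source and computes a CRC-32C over the data in the same operation, seeded with 0 per "CRC-32C seed: 0". A self-contained sketch using the reflected Castagnoli polynomial 0x82F63B78; this is the slow bit-at-a-time form for clarity, whereas production modules typically use lookup tables or the SSE4.2 crc32 instruction:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define XFER_SIZE 4096  /* "Transfer size: 4096 bytes" */

    /* Bitwise CRC-32C (Castagnoli), reflected; slow but easy to check. */
    static uint32_t crc32c(uint32_t crc, const uint8_t *buf, size_t len)
    {
        crc = ~crc;
        while (len--) {
            crc ^= *buf++;
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0x82F63B78U & (0U - (crc & 1U)));
        }
        return ~crc;
    }

    int main(void)
    {
        static uint8_t src[XFER_SIZE], dst[XFER_SIZE];
        memset(src, 0x5A, sizeof(src));

        /* One copy_crc32c op: copy and checksum in a single submission. */
        memcpy(dst, src, sizeof(src));
        printf("crc32c = 0x%08x\n", crc32c(0, dst, sizeof(dst)));
        return 0;
    }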
00:07:25.593 [2024-11-26 17:53:42.328681] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71050 ] 00:07:25.593 [2024-11-26 17:53:42.480070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.852 [2024-11-26 17:53:42.528131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.253 17:53:43 -- accel/accel.sh@18 -- # out=' 00:07:27.253 SPDK Configuration: 00:07:27.253 Core mask: 0x1 00:07:27.253 00:07:27.253 Accel Perf Configuration: 00:07:27.253 Workload Type: copy_crc32c 00:07:27.253 CRC-32C seed: 0 00:07:27.253 Vector size: 4096 bytes 00:07:27.253 Transfer size: 8192 bytes 00:07:27.253 Vector count 2 00:07:27.253 Module: software 00:07:27.253 Queue depth: 32 00:07:27.253 Allocate depth: 32 00:07:27.253 # threads/core: 1 00:07:27.253 Run time: 1 seconds 00:07:27.253 Verify: Yes 00:07:27.253 00:07:27.253 Running for 1 seconds... 00:07:27.253 00:07:27.253 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:27.253 ------------------------------------------------------------------------------------ 00:07:27.253 0,0 193824/s 1514 MiB/s 0 0 00:07:27.253 ==================================================================================== 00:07:27.253 Total 193824/s 757 MiB/s 0 0' 00:07:27.253 17:53:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:27.253 17:53:43 -- accel/accel.sh@20 -- # IFS=: 00:07:27.253 17:53:43 -- accel/accel.sh@20 -- # read -r var val 00:07:27.253 17:53:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:27.253 17:53:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.253 17:53:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.253 17:53:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.253 17:53:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.253 17:53:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.253 17:53:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.253 17:53:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.253 17:53:43 -- accel/accel.sh@42 -- # jq -r . 00:07:27.253 [2024-11-26 17:53:43.792317] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:27.253 [2024-11-26 17:53:43.792891] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71076 ] 00:07:27.253 [2024-11-26 17:53:43.955531] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.253 [2024-11-26 17:53:44.003642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.253 17:53:44 -- accel/accel.sh@21 -- # val= 00:07:27.253 17:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.253 17:53:44 -- accel/accel.sh@21 -- # val= 00:07:27.253 17:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.253 17:53:44 -- accel/accel.sh@21 -- # val=0x1 00:07:27.253 17:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.253 17:53:44 -- accel/accel.sh@21 -- # val= 00:07:27.253 17:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.253 17:53:44 -- accel/accel.sh@21 -- # val= 00:07:27.253 17:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.253 17:53:44 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:27.253 17:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.253 17:53:44 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.253 17:53:44 -- accel/accel.sh@21 -- # val=0 00:07:27.253 17:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.253 17:53:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:27.253 17:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.253 17:53:44 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:27.253 17:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.253 17:53:44 -- accel/accel.sh@21 -- # val= 00:07:27.253 17:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.253 17:53:44 -- accel/accel.sh@21 -- # val=software 00:07:27.253 17:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.253 17:53:44 -- accel/accel.sh@23 -- # accel_module=software 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.253 17:53:44 -- accel/accel.sh@21 -- # val=32 00:07:27.253 17:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.253 17:53:44 -- accel/accel.sh@21 -- # val=32 
00:07:27.253 17:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.253 17:53:44 -- accel/accel.sh@21 -- # val=1 00:07:27.253 17:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.253 17:53:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:27.253 17:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.253 17:53:44 -- accel/accel.sh@21 -- # val=Yes 00:07:27.253 17:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.253 17:53:44 -- accel/accel.sh@21 -- # val= 00:07:27.253 17:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.253 17:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.253 17:53:44 -- accel/accel.sh@21 -- # val= 00:07:27.253 17:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.254 17:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.254 17:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:28.678 17:53:45 -- accel/accel.sh@21 -- # val= 00:07:28.678 17:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.678 17:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:28.678 17:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:28.678 17:53:45 -- accel/accel.sh@21 -- # val= 00:07:28.678 17:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.678 17:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:28.678 17:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:28.678 17:53:45 -- accel/accel.sh@21 -- # val= 00:07:28.678 17:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.678 17:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:28.678 17:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:28.678 17:53:45 -- accel/accel.sh@21 -- # val= 00:07:28.678 17:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.678 17:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:28.678 17:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:28.678 17:53:45 -- accel/accel.sh@21 -- # val= 00:07:28.678 17:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.678 17:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:28.678 17:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:28.678 17:53:45 -- accel/accel.sh@21 -- # val= 00:07:28.678 17:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.678 17:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:28.678 17:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:28.678 17:53:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.678 ************************************ 00:07:28.678 END TEST accel_copy_crc32c_C2 00:07:28.678 ************************************ 00:07:28.678 17:53:45 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:28.678 17:53:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.678 00:07:28.678 real 0m3.042s 00:07:28.678 user 0m2.526s 00:07:28.678 sys 0m0.315s 00:07:28.678 17:53:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.678 17:53:45 -- common/autotest_common.sh@10 -- # set +x 00:07:28.678 17:53:45 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:28.678 17:53:45 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
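The _C2 variant that just finished (real 0m3.042s) runs the same opcode with "Vector count 2": two 4096-byte source vectors per operation, so each transfer moves 8192 bytes and the CRC chains across segments. One observation on its summary table: the per-core row reports 1514 MiB/s (193824/s at 8192 bytes) while the Total row prints 757 MiB/s, exactly what the 4096-byte vector size would yield, so the Total column appears to account per vector rather than per transfer. A chaining sketch, reusing the same illustrative CRC helper as the previous block:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Same illustrative bitwise CRC-32C as the earlier sketch. */
    static uint32_t crc32c(uint32_t crc, const uint8_t *buf, size_t len)
    {
        crc = ~crc;
        while (len--) {
            crc ^= *buf++;
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0x82F63B78U & (0U - (crc & 1U)));
        }
        return ~crc;
    }

    int main(void)
    {
        /* "Vector size: 4096 bytes", "Transfer size: 8192 bytes", "Vector count 2" */
        static uint8_t v0[4096], v1[4096], dst[8192];
        memset(v0, 0x11, sizeof(v0));
        memset(v1, 0x22, sizeof(v1));

        /* Copy both vectors, chaining the CRC across segments from seed 0. */
        memcpy(dst, v0, sizeof(v0));
        memcpy(dst + sizeof(v0), v1, sizeof(v1));
        uint32_t crc = crc32c(0, v0, sizeof(v0));
        crc = crc32c(crc, v1, sizeof(v1));
        printf("chained crc32c = 0x%08x\n", crc);  /* equals crc32c(0, dst, 8192) */
        return 0;
    }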
00:07:28.678 17:53:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.678 17:53:45 -- common/autotest_common.sh@10 -- # set +x 00:07:28.678 ************************************ 00:07:28.678 START TEST accel_dualcast 00:07:28.678 ************************************ 00:07:28.678 17:53:45 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:07:28.678 17:53:45 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.678 17:53:45 -- accel/accel.sh@17 -- # local accel_module 00:07:28.678 17:53:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:28.678 17:53:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:28.678 17:53:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.678 17:53:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.678 17:53:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.678 17:53:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.678 17:53:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.678 17:53:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.678 17:53:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.678 17:53:45 -- accel/accel.sh@42 -- # jq -r . 00:07:28.678 [2024-11-26 17:53:45.440576] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:28.678 [2024-11-26 17:53:45.440951] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71106 ] 00:07:28.678 [2024-11-26 17:53:45.593726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.936 [2024-11-26 17:53:45.671575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.397 17:53:47 -- accel/accel.sh@18 -- # out=' 00:07:30.397 SPDK Configuration: 00:07:30.397 Core mask: 0x1 00:07:30.397 00:07:30.397 Accel Perf Configuration: 00:07:30.397 Workload Type: dualcast 00:07:30.397 Transfer size: 4096 bytes 00:07:30.397 Vector count 1 00:07:30.397 Module: software 00:07:30.397 Queue depth: 32 00:07:30.397 Allocate depth: 32 00:07:30.397 # threads/core: 1 00:07:30.397 Run time: 1 seconds 00:07:30.397 Verify: Yes 00:07:30.397 00:07:30.397 Running for 1 seconds... 00:07:30.397 00:07:30.397 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:30.397 ------------------------------------------------------------------------------------ 00:07:30.397 0,0 400256/s 1563 MiB/s 0 0 00:07:30.397 ==================================================================================== 00:07:30.397 Total 400256/s 1563 MiB/s 0 0' 00:07:30.397 17:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:30.398 17:53:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:30.398 17:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:30.398 17:53:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:30.398 17:53:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.398 17:53:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.398 17:53:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.398 17:53:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.398 17:53:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.398 17:53:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.398 17:53:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.398 17:53:47 -- accel/accel.sh@42 -- # jq -r . 
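The dualcast pass above ran at 400256 transfers/s, about 1563 MiB/s counting each 4096-byte source once. The op writes a single source to two destination buffers in one submission; a minimal illustrative sketch:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define XFER_SIZE 4096  /* "Transfer size: 4096 bytes" */

    /* One dualcast op: same source to two destinations, then verify both. */
    static int dualcast_op(uint8_t *d1, uint8_t *d2, const uint8_t *src, size_t len)
    {
        memcpy(d1, src, len);
        memcpy(d2, src, len);
        return (memcmp(d1, src, len) || memcmp(d2, src, len)) ? -1 : 0;
    }

    int main(void)
    {
        static uint8_t src[XFER_SIZE], a[XFER_SIZE], b[XFER_SIZE];
        memset(src, 0xC3, sizeof(src));
        printf("dualcast %s\n", dualcast_op(a, b, src, sizeof(src)) ? "miscompare" : "ok");
        return 0;
    }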
00:07:30.398 [2024-11-26 17:53:47.068280] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:30.398 [2024-11-26 17:53:47.068733] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71132 ] 00:07:30.398 [2024-11-26 17:53:47.219900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.398 [2024-11-26 17:53:47.298704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.657 17:53:47 -- accel/accel.sh@21 -- # val= 00:07:30.657 17:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.657 17:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:30.657 17:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:30.657 17:53:47 -- accel/accel.sh@21 -- # val= 00:07:30.657 17:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.657 17:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:30.657 17:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:30.657 17:53:47 -- accel/accel.sh@21 -- # val=0x1 00:07:30.657 17:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.657 17:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:30.657 17:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:30.657 17:53:47 -- accel/accel.sh@21 -- # val= 00:07:30.657 17:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.657 17:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:30.657 17:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:30.657 17:53:47 -- accel/accel.sh@21 -- # val= 00:07:30.657 17:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.657 17:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:30.657 17:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:30.657 17:53:47 -- accel/accel.sh@21 -- # val=dualcast 00:07:30.657 17:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.657 17:53:47 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:30.657 17:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:30.657 17:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:30.657 17:53:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:30.657 17:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.657 17:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:30.657 17:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:30.658 17:53:47 -- accel/accel.sh@21 -- # val= 00:07:30.658 17:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.658 17:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:30.658 17:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:30.658 17:53:47 -- accel/accel.sh@21 -- # val=software 00:07:30.658 17:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.658 17:53:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:30.658 17:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:30.658 17:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:30.658 17:53:47 -- accel/accel.sh@21 -- # val=32 00:07:30.658 17:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.658 17:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:30.658 17:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:30.658 17:53:47 -- accel/accel.sh@21 -- # val=32 00:07:30.658 17:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.658 17:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:30.658 17:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:30.658 17:53:47 -- accel/accel.sh@21 -- # val=1 00:07:30.658 17:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.658 17:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:30.658 
17:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:30.658 17:53:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:30.658 17:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.658 17:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:30.658 17:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:30.658 17:53:47 -- accel/accel.sh@21 -- # val=Yes 00:07:30.658 17:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.658 17:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:30.658 17:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:30.658 17:53:47 -- accel/accel.sh@21 -- # val= 00:07:30.658 17:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.658 17:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:30.658 17:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:30.658 17:53:47 -- accel/accel.sh@21 -- # val= 00:07:30.658 17:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.658 17:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:30.658 17:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:32.034 17:53:48 -- accel/accel.sh@21 -- # val= 00:07:32.034 17:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.034 17:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:32.034 17:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:32.034 17:53:48 -- accel/accel.sh@21 -- # val= 00:07:32.034 17:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.034 17:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:32.034 17:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:32.034 17:53:48 -- accel/accel.sh@21 -- # val= 00:07:32.034 17:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.034 17:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:32.034 17:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:32.034 17:53:48 -- accel/accel.sh@21 -- # val= 00:07:32.034 17:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.034 17:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:32.034 17:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:32.034 17:53:48 -- accel/accel.sh@21 -- # val= 00:07:32.034 17:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.034 17:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:32.034 17:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:32.034 17:53:48 -- accel/accel.sh@21 -- # val= 00:07:32.034 17:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.034 17:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:32.034 17:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:32.034 17:53:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:32.034 17:53:48 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:32.034 17:53:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.034 00:07:32.034 real 0m3.261s 00:07:32.034 user 0m2.657s 00:07:32.034 sys 0m0.408s 00:07:32.034 ************************************ 00:07:32.034 END TEST accel_dualcast 00:07:32.034 ************************************ 00:07:32.034 17:53:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.034 17:53:48 -- common/autotest_common.sh@10 -- # set +x 00:07:32.034 17:53:48 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:32.034 17:53:48 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:32.034 17:53:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.034 17:53:48 -- common/autotest_common.sh@10 -- # set +x 00:07:32.034 ************************************ 00:07:32.034 START TEST accel_compare 00:07:32.034 ************************************ 00:07:32.034 17:53:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:07:32.034 
17:53:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:32.034 17:53:48 -- accel/accel.sh@17 -- # local accel_module 00:07:32.034 17:53:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:32.034 17:53:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:32.034 17:53:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.034 17:53:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.034 17:53:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.034 17:53:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.034 17:53:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.034 17:53:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.034 17:53:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.034 17:53:48 -- accel/accel.sh@42 -- # jq -r . 00:07:32.034 [2024-11-26 17:53:48.771110] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:32.034 [2024-11-26 17:53:48.771612] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71162 ] 00:07:32.034 [2024-11-26 17:53:48.923399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.293 [2024-11-26 17:53:48.994005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.669 17:53:50 -- accel/accel.sh@18 -- # out=' 00:07:33.669 SPDK Configuration: 00:07:33.669 Core mask: 0x1 00:07:33.669 00:07:33.669 Accel Perf Configuration: 00:07:33.669 Workload Type: compare 00:07:33.669 Transfer size: 4096 bytes 00:07:33.669 Vector count 1 00:07:33.669 Module: software 00:07:33.669 Queue depth: 32 00:07:33.669 Allocate depth: 32 00:07:33.669 # threads/core: 1 00:07:33.669 Run time: 1 seconds 00:07:33.669 Verify: Yes 00:07:33.669 00:07:33.669 Running for 1 seconds... 00:07:33.669 00:07:33.669 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:33.669 ------------------------------------------------------------------------------------ 00:07:33.669 0,0 525536/s 2052 MiB/s 0 0 00:07:33.669 ==================================================================================== 00:07:33.669 Total 525536/s 2052 MiB/s 0 0' 00:07:33.669 17:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:33.669 17:53:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:33.669 17:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:33.669 17:53:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:33.669 17:53:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.669 17:53:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.669 17:53:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.669 17:53:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.669 17:53:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.669 17:53:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.669 17:53:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.669 17:53:50 -- accel/accel.sh@42 -- # jq -r . 00:07:33.669 [2024-11-26 17:53:50.378635] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
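compare (525536/s, about 2052 MiB/s in the table above) reads two 4096-byte buffers and reports whether they differ; any mismatch would land in the Miscompares column, which stays at 0 here. A minimal sketch:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define XFER_SIZE 4096  /* "Transfer size: 4096 bytes" */

    int main(void)
    {
        static uint8_t a[XFER_SIZE], b[XFER_SIZE];
        memset(a, 0x7E, sizeof(a));
        memcpy(b, a, sizeof(b));

        /* One compare op: equality check only, no data is written. */
        int miscompare = memcmp(a, b, sizeof(a)) != 0;
        printf("compare: %s\n", miscompare ? "miscompare" : "match");
        return 0;
    }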
00:07:33.669 [2024-11-26 17:53:50.378988] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71188 ] 00:07:33.669 [2024-11-26 17:53:50.530417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.929 [2024-11-26 17:53:50.603220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.929 17:53:50 -- accel/accel.sh@21 -- # val= 00:07:33.929 17:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:33.929 17:53:50 -- accel/accel.sh@21 -- # val= 00:07:33.929 17:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:33.929 17:53:50 -- accel/accel.sh@21 -- # val=0x1 00:07:33.929 17:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:33.929 17:53:50 -- accel/accel.sh@21 -- # val= 00:07:33.929 17:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:33.929 17:53:50 -- accel/accel.sh@21 -- # val= 00:07:33.929 17:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:33.929 17:53:50 -- accel/accel.sh@21 -- # val=compare 00:07:33.929 17:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.929 17:53:50 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:33.929 17:53:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:33.929 17:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:33.929 17:53:50 -- accel/accel.sh@21 -- # val= 00:07:33.929 17:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:33.929 17:53:50 -- accel/accel.sh@21 -- # val=software 00:07:33.929 17:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.929 17:53:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:33.929 17:53:50 -- accel/accel.sh@21 -- # val=32 00:07:33.929 17:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:33.929 17:53:50 -- accel/accel.sh@21 -- # val=32 00:07:33.929 17:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:33.929 17:53:50 -- accel/accel.sh@21 -- # val=1 00:07:33.929 17:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:33.929 17:53:50 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:33.929 17:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:33.929 17:53:50 -- accel/accel.sh@21 -- # val=Yes 00:07:33.929 17:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:33.929 17:53:50 -- accel/accel.sh@21 -- # val= 00:07:33.929 17:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:33.929 17:53:50 -- accel/accel.sh@21 -- # val= 00:07:33.929 17:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:33.929 17:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:35.306 17:53:51 -- accel/accel.sh@21 -- # val= 00:07:35.306 17:53:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.306 17:53:51 -- accel/accel.sh@20 -- # IFS=: 00:07:35.306 17:53:51 -- accel/accel.sh@20 -- # read -r var val 00:07:35.306 17:53:51 -- accel/accel.sh@21 -- # val= 00:07:35.306 17:53:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.306 17:53:51 -- accel/accel.sh@20 -- # IFS=: 00:07:35.306 17:53:51 -- accel/accel.sh@20 -- # read -r var val 00:07:35.306 17:53:51 -- accel/accel.sh@21 -- # val= 00:07:35.306 17:53:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.306 17:53:51 -- accel/accel.sh@20 -- # IFS=: 00:07:35.306 17:53:51 -- accel/accel.sh@20 -- # read -r var val 00:07:35.306 17:53:51 -- accel/accel.sh@21 -- # val= 00:07:35.306 17:53:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.306 17:53:51 -- accel/accel.sh@20 -- # IFS=: 00:07:35.306 17:53:51 -- accel/accel.sh@20 -- # read -r var val 00:07:35.306 17:53:51 -- accel/accel.sh@21 -- # val= 00:07:35.306 17:53:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.306 17:53:51 -- accel/accel.sh@20 -- # IFS=: 00:07:35.306 17:53:51 -- accel/accel.sh@20 -- # read -r var val 00:07:35.306 17:53:51 -- accel/accel.sh@21 -- # val= 00:07:35.306 17:53:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.306 17:53:51 -- accel/accel.sh@20 -- # IFS=: 00:07:35.306 17:53:51 -- accel/accel.sh@20 -- # read -r var val 00:07:35.306 17:53:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:35.306 17:53:51 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:35.306 17:53:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.306 00:07:35.306 real 0m3.233s 00:07:35.306 user 0m2.635s 00:07:35.306 sys 0m0.402s 00:07:35.306 17:53:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.306 ************************************ 00:07:35.306 END TEST accel_compare 00:07:35.306 ************************************ 00:07:35.306 17:53:51 -- common/autotest_common.sh@10 -- # set +x 00:07:35.306 17:53:52 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:35.306 17:53:52 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:35.306 17:53:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.306 17:53:52 -- common/autotest_common.sh@10 -- # set +x 00:07:35.306 ************************************ 00:07:35.306 START TEST accel_xor 00:07:35.306 ************************************ 00:07:35.306 17:53:52 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:35.306 17:53:52 -- accel/accel.sh@16 -- # local accel_opc 00:07:35.306 17:53:52 -- accel/accel.sh@17 -- # local accel_module 00:07:35.306 
17:53:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:35.306 17:53:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.306 17:53:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:35.306 17:53:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.306 17:53:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.306 17:53:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.306 17:53:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.306 17:53:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.306 17:53:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.306 17:53:52 -- accel/accel.sh@42 -- # jq -r . 00:07:35.306 [2024-11-26 17:53:52.067103] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:35.306 [2024-11-26 17:53:52.067500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71224 ] 00:07:35.306 [2024-11-26 17:53:52.219221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.643 [2024-11-26 17:53:52.289048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.044 17:53:53 -- accel/accel.sh@18 -- # out=' 00:07:37.044 SPDK Configuration: 00:07:37.044 Core mask: 0x1 00:07:37.044 00:07:37.044 Accel Perf Configuration: 00:07:37.044 Workload Type: xor 00:07:37.044 Source buffers: 2 00:07:37.044 Transfer size: 4096 bytes 00:07:37.044 Vector count 1 00:07:37.044 Module: software 00:07:37.044 Queue depth: 32 00:07:37.044 Allocate depth: 32 00:07:37.044 # threads/core: 1 00:07:37.044 Run time: 1 seconds 00:07:37.044 Verify: Yes 00:07:37.044 00:07:37.044 Running for 1 seconds... 00:07:37.044 00:07:37.044 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:37.044 ------------------------------------------------------------------------------------ 00:07:37.044 0,0 323616/s 1264 MiB/s 0 0 00:07:37.044 ==================================================================================== 00:07:37.044 Total 323616/s 1264 MiB/s 0 0' 00:07:37.044 17:53:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:37.044 17:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.044 17:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.044 17:53:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:37.044 17:53:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.044 17:53:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.044 17:53:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.044 17:53:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.044 17:53:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.044 17:53:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.044 17:53:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.044 17:53:53 -- accel/accel.sh@42 -- # jq -r . 00:07:37.044 [2024-11-26 17:53:53.597055] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
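xor with "Source buffers: 2" (323616/s, about 1264 MiB/s above) XORs the source buffers into a destination, the parity primitive behind RAID-5-style layouts. A generic N-source sketch; xor_gen is an illustrative name, not an SPDK symbol:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define XFER_SIZE 4096  /* "Transfer size: 4096 bytes" */

    /* XOR nsrc source buffers into dst, byte by byte. */
    static void xor_gen(uint8_t *dst, const uint8_t *const *srcs, int nsrc, size_t len)
    {
        memset(dst, 0, len);
        for (int s = 0; s < nsrc; s++)
            for (size_t i = 0; i < len; i++)
                dst[i] ^= srcs[s][i];
    }

    int main(void)
    {
        static uint8_t s0[XFER_SIZE], s1[XFER_SIZE], dst[XFER_SIZE];
        memset(s0, 0xF0, sizeof(s0));
        memset(s1, 0x0F, sizeof(s1));
        const uint8_t *srcs[] = { s0, s1 };

        xor_gen(dst, srcs, 2, XFER_SIZE);     /* -w xor with 2 sources */
        printf("dst[0] = 0x%02x\n", dst[0]);  /* 0xF0 ^ 0x0F = 0xFF */
        return 0;
    }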
00:07:37.044 [2024-11-26 17:53:53.597225] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71244 ] 00:07:37.044 [2024-11-26 17:53:53.751825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.044 [2024-11-26 17:53:53.800982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.044 17:53:53 -- accel/accel.sh@21 -- # val= 00:07:37.044 17:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.044 17:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.044 17:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.044 17:53:53 -- accel/accel.sh@21 -- # val= 00:07:37.044 17:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.044 17:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.044 17:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.044 17:53:53 -- accel/accel.sh@21 -- # val=0x1 00:07:37.044 17:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.044 17:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.044 17:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.044 17:53:53 -- accel/accel.sh@21 -- # val= 00:07:37.044 17:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.044 17:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.044 17:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.044 17:53:53 -- accel/accel.sh@21 -- # val= 00:07:37.044 17:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.044 17:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.044 17:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.044 17:53:53 -- accel/accel.sh@21 -- # val=xor 00:07:37.044 17:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.044 17:53:53 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:37.044 17:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.044 17:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.044 17:53:53 -- accel/accel.sh@21 -- # val=2 00:07:37.044 17:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.044 17:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.044 17:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.044 17:53:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:37.044 17:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.044 17:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.044 17:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.044 17:53:53 -- accel/accel.sh@21 -- # val= 00:07:37.044 17:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.044 17:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.044 17:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.044 17:53:53 -- accel/accel.sh@21 -- # val=software 00:07:37.044 17:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.044 17:53:53 -- accel/accel.sh@23 -- # accel_module=software 00:07:37.044 17:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.044 17:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.044 17:53:53 -- accel/accel.sh@21 -- # val=32 00:07:37.045 17:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.045 17:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.045 17:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.045 17:53:53 -- accel/accel.sh@21 -- # val=32 00:07:37.045 17:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.045 17:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.045 17:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.045 17:53:53 -- accel/accel.sh@21 -- # val=1 00:07:37.045 17:53:53 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:37.045 17:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.045 17:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.045 17:53:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:37.045 17:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.045 17:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.045 17:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.045 17:53:53 -- accel/accel.sh@21 -- # val=Yes 00:07:37.045 17:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.045 17:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.045 17:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.045 17:53:53 -- accel/accel.sh@21 -- # val= 00:07:37.045 17:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.045 17:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.045 17:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.045 17:53:53 -- accel/accel.sh@21 -- # val= 00:07:37.045 17:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.045 17:53:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.045 17:53:53 -- accel/accel.sh@20 -- # read -r var val 00:07:38.423 17:53:55 -- accel/accel.sh@21 -- # val= 00:07:38.423 17:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.423 17:53:55 -- accel/accel.sh@20 -- # IFS=: 00:07:38.423 17:53:55 -- accel/accel.sh@20 -- # read -r var val 00:07:38.423 17:53:55 -- accel/accel.sh@21 -- # val= 00:07:38.423 17:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.423 17:53:55 -- accel/accel.sh@20 -- # IFS=: 00:07:38.423 17:53:55 -- accel/accel.sh@20 -- # read -r var val 00:07:38.423 17:53:55 -- accel/accel.sh@21 -- # val= 00:07:38.423 17:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.423 17:53:55 -- accel/accel.sh@20 -- # IFS=: 00:07:38.423 17:53:55 -- accel/accel.sh@20 -- # read -r var val 00:07:38.423 17:53:55 -- accel/accel.sh@21 -- # val= 00:07:38.423 17:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.423 17:53:55 -- accel/accel.sh@20 -- # IFS=: 00:07:38.423 17:53:55 -- accel/accel.sh@20 -- # read -r var val 00:07:38.423 17:53:55 -- accel/accel.sh@21 -- # val= 00:07:38.423 17:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.423 17:53:55 -- accel/accel.sh@20 -- # IFS=: 00:07:38.423 17:53:55 -- accel/accel.sh@20 -- # read -r var val 00:07:38.423 17:53:55 -- accel/accel.sh@21 -- # val= 00:07:38.423 17:53:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.423 17:53:55 -- accel/accel.sh@20 -- # IFS=: 00:07:38.423 17:53:55 -- accel/accel.sh@20 -- # read -r var val 00:07:38.423 17:53:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:38.423 17:53:55 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:38.423 ************************************ 00:07:38.423 END TEST accel_xor 00:07:38.423 ************************************ 00:07:38.423 17:53:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.423 00:07:38.423 real 0m3.007s 00:07:38.423 user 0m2.428s 00:07:38.423 sys 0m0.378s 00:07:38.423 17:53:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.423 17:53:55 -- common/autotest_common.sh@10 -- # set +x 00:07:38.423 17:53:55 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:38.423 17:53:55 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:38.423 17:53:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.423 17:53:55 -- common/autotest_common.sh@10 -- # set +x 00:07:38.423 ************************************ 00:07:38.423 START TEST accel_xor 00:07:38.423 ************************************ 00:07:38.423 
17:53:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:38.423 17:53:55 -- accel/accel.sh@16 -- # local accel_opc 00:07:38.423 17:53:55 -- accel/accel.sh@17 -- # local accel_module 00:07:38.423 17:53:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:38.423 17:53:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:38.423 17:53:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.423 17:53:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.423 17:53:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.423 17:53:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.423 17:53:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.423 17:53:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.423 17:53:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.423 17:53:55 -- accel/accel.sh@42 -- # jq -r . 00:07:38.423 [2024-11-26 17:53:55.128965] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:38.423 [2024-11-26 17:53:55.129531] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71280 ] 00:07:38.423 [2024-11-26 17:53:55.292916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.423 [2024-11-26 17:53:55.341702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.800 17:53:56 -- accel/accel.sh@18 -- # out=' 00:07:39.800 SPDK Configuration: 00:07:39.800 Core mask: 0x1 00:07:39.800 00:07:39.800 Accel Perf Configuration: 00:07:39.800 Workload Type: xor 00:07:39.800 Source buffers: 3 00:07:39.800 Transfer size: 4096 bytes 00:07:39.800 Vector count 1 00:07:39.800 Module: software 00:07:39.800 Queue depth: 32 00:07:39.800 Allocate depth: 32 00:07:39.800 # threads/core: 1 00:07:39.800 Run time: 1 seconds 00:07:39.800 Verify: Yes 00:07:39.800 00:07:39.800 Running for 1 seconds... 00:07:39.800 00:07:39.800 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:39.800 ------------------------------------------------------------------------------------ 00:07:39.800 0,0 336096/s 1312 MiB/s 0 0 00:07:39.800 ==================================================================================== 00:07:39.800 Total 336096/s 1312 MiB/s 0 0' 00:07:39.800 17:53:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:39.800 17:53:56 -- accel/accel.sh@20 -- # IFS=: 00:07:39.800 17:53:56 -- accel/accel.sh@20 -- # read -r var val 00:07:39.800 17:53:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:39.800 17:53:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.800 17:53:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.800 17:53:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.800 17:53:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.800 17:53:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.800 17:53:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.800 17:53:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.800 17:53:56 -- accel/accel.sh@42 -- # jq -r . 00:07:39.800 [2024-11-26 17:53:56.608695] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
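This second accel_xor test is the same workload with -x 3, i.e. three source buffers instead of two; in terms of the sketch above it is xor_buffers(dst, srcs, 3, 4096), with the software module, queue depth 32, and verification otherwise unchanged.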
00:07:39.800 [2024-11-26 17:53:56.608849] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71300 ] 00:07:40.058 [2024-11-26 17:53:56.760906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.058 [2024-11-26 17:53:56.808317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.058 17:53:56 -- accel/accel.sh@21 -- # val= 00:07:40.058 17:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.058 17:53:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.058 17:53:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.058 17:53:56 -- accel/accel.sh@21 -- # val= 00:07:40.058 17:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.058 17:53:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.058 17:53:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.058 17:53:56 -- accel/accel.sh@21 -- # val=0x1 00:07:40.058 17:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.058 17:53:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.058 17:53:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.058 17:53:56 -- accel/accel.sh@21 -- # val= 00:07:40.058 17:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.058 17:53:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.058 17:53:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.058 17:53:56 -- accel/accel.sh@21 -- # val= 00:07:40.058 17:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.058 17:53:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.058 17:53:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.058 17:53:56 -- accel/accel.sh@21 -- # val=xor 00:07:40.058 17:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.058 17:53:56 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:40.058 17:53:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.058 17:53:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.058 17:53:56 -- accel/accel.sh@21 -- # val=3 00:07:40.059 17:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 17:53:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 17:53:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 17:53:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:40.059 17:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 17:53:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 17:53:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 17:53:56 -- accel/accel.sh@21 -- # val= 00:07:40.059 17:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 17:53:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 17:53:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 17:53:56 -- accel/accel.sh@21 -- # val=software 00:07:40.059 17:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 17:53:56 -- accel/accel.sh@23 -- # accel_module=software 00:07:40.059 17:53:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 17:53:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 17:53:56 -- accel/accel.sh@21 -- # val=32 00:07:40.059 17:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 17:53:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 17:53:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 17:53:56 -- accel/accel.sh@21 -- # val=32 00:07:40.059 17:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 17:53:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 17:53:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 17:53:56 -- accel/accel.sh@21 -- # val=1 00:07:40.059 17:53:56 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:40.059 17:53:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 17:53:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 17:53:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:40.059 17:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 17:53:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 17:53:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 17:53:56 -- accel/accel.sh@21 -- # val=Yes 00:07:40.059 17:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 17:53:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 17:53:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 17:53:56 -- accel/accel.sh@21 -- # val= 00:07:40.059 17:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 17:53:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 17:53:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 17:53:56 -- accel/accel.sh@21 -- # val= 00:07:40.059 17:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 17:53:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 17:53:56 -- accel/accel.sh@20 -- # read -r var val 00:07:41.436 17:53:58 -- accel/accel.sh@21 -- # val= 00:07:41.436 17:53:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.436 17:53:58 -- accel/accel.sh@20 -- # IFS=: 00:07:41.436 17:53:58 -- accel/accel.sh@20 -- # read -r var val 00:07:41.436 17:53:58 -- accel/accel.sh@21 -- # val= 00:07:41.436 17:53:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.436 17:53:58 -- accel/accel.sh@20 -- # IFS=: 00:07:41.436 17:53:58 -- accel/accel.sh@20 -- # read -r var val 00:07:41.436 17:53:58 -- accel/accel.sh@21 -- # val= 00:07:41.436 17:53:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.436 17:53:58 -- accel/accel.sh@20 -- # IFS=: 00:07:41.436 17:53:58 -- accel/accel.sh@20 -- # read -r var val 00:07:41.436 17:53:58 -- accel/accel.sh@21 -- # val= 00:07:41.436 17:53:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.436 17:53:58 -- accel/accel.sh@20 -- # IFS=: 00:07:41.436 17:53:58 -- accel/accel.sh@20 -- # read -r var val 00:07:41.436 17:53:58 -- accel/accel.sh@21 -- # val= 00:07:41.436 17:53:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.436 17:53:58 -- accel/accel.sh@20 -- # IFS=: 00:07:41.436 17:53:58 -- accel/accel.sh@20 -- # read -r var val 00:07:41.436 17:53:58 -- accel/accel.sh@21 -- # val= 00:07:41.436 17:53:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.436 17:53:58 -- accel/accel.sh@20 -- # IFS=: 00:07:41.436 17:53:58 -- accel/accel.sh@20 -- # read -r var val 00:07:41.436 17:53:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:41.436 17:53:58 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:41.436 17:53:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.436 ************************************ 00:07:41.436 END TEST accel_xor 00:07:41.436 ************************************ 00:07:41.436 00:07:41.436 real 0m2.942s 00:07:41.436 user 0m2.417s 00:07:41.436 sys 0m0.327s 00:07:41.436 17:53:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:41.436 17:53:58 -- common/autotest_common.sh@10 -- # set +x 00:07:41.436 17:53:58 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:41.436 17:53:58 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:41.436 17:53:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.436 17:53:58 -- common/autotest_common.sh@10 -- # set +x 00:07:41.436 ************************************ 00:07:41.436 START TEST accel_dif_verify 00:07:41.436 ************************************ 
00:07:41.436 17:53:58 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:41.437 17:53:58 -- accel/accel.sh@16 -- # local accel_opc 00:07:41.437 17:53:58 -- accel/accel.sh@17 -- # local accel_module 00:07:41.437 17:53:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:41.437 17:53:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.437 17:53:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:41.437 17:53:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.437 17:53:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.437 17:53:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.437 17:53:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.437 17:53:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.437 17:53:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.437 17:53:58 -- accel/accel.sh@42 -- # jq -r . 00:07:41.437 [2024-11-26 17:53:58.140465] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:41.437 [2024-11-26 17:53:58.140591] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71331 ] 00:07:41.437 [2024-11-26 17:53:58.292805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.437 [2024-11-26 17:53:58.340422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.813 17:53:59 -- accel/accel.sh@18 -- # out=' 00:07:42.813 SPDK Configuration: 00:07:42.813 Core mask: 0x1 00:07:42.813 00:07:42.813 Accel Perf Configuration: 00:07:42.813 Workload Type: dif_verify 00:07:42.813 Vector size: 4096 bytes 00:07:42.813 Transfer size: 4096 bytes 00:07:42.813 Block size: 512 bytes 00:07:42.813 Metadata size: 8 bytes 00:07:42.813 Vector count 1 00:07:42.813 Module: software 00:07:42.813 Queue depth: 32 00:07:42.813 Allocate depth: 32 00:07:42.813 # threads/core: 1 00:07:42.813 Run time: 1 seconds 00:07:42.813 Verify: No 00:07:42.813 00:07:42.813 Running for 1 seconds... 00:07:42.813 00:07:42.813 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:42.813 ------------------------------------------------------------------------------------ 00:07:42.813 0,0 118112/s 468 MiB/s 0 0 00:07:42.813 ==================================================================================== 00:07:42.813 Total 118112/s 461 MiB/s 0 0' 00:07:42.813 17:53:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:42.813 17:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:42.813 17:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:42.813 17:53:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:42.813 17:53:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.813 17:53:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.813 17:53:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.813 17:53:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.813 17:53:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.813 17:53:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.813 17:53:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.813 17:53:59 -- accel/accel.sh@42 -- # jq -r . 00:07:42.813 [2024-11-26 17:53:59.602995] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
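The dif_verify configuration above (4096-byte transfers, 512-byte blocks, 8 bytes of metadata per block) matches the standard T10 protection-information layout: each 512-byte block carries a 2-byte CRC guard, a 2-byte application tag, and a 4-byte reference tag. A rough, self-contained sketch of the per-block check (field names and the bit-by-bit CRC are illustrative; real implementations use table-driven CRCs and also handle application-tag escape values, which are omitted here):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* T10 DIF tuple: 8 bytes of metadata per 512-byte block, matching
     * "Block size: 512 bytes / Metadata size: 8 bytes" in the config.
     * On the wire these fields are big-endian; endianness handling is
     * omitted for brevity. */
    struct dif_tuple {
        uint16_t guard;   /* CRC16 of the data block */
        uint16_t app_tag;
        uint32_t ref_tag; /* typically the low 32 bits of the LBA */
    };

    /* CRC16 with the T10-DIF polynomial 0x8BB7, bit by bit for clarity. */
    static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
    {
        uint16_t crc = 0;
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)buf[i] << 8;
            for (int b = 0; b < 8; b++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    /* Verify one protected block: recompute the guard, compare tags. */
    static bool dif_verify_block(const uint8_t block[512],
                                 const struct dif_tuple *t,
                                 uint32_t expected_ref_tag)
    {
        return t->guard == crc16_t10dif(block, 512) &&
               t->ref_tag == expected_ref_tag;
    }

Each 4096-byte transfer covers eight such blocks, and the whole operation passes only if every tuple verifies.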
00:07:42.813 [2024-11-26 17:53:59.603210] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71351 ] 00:07:43.072 [2024-11-26 17:53:59.763640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.072 [2024-11-26 17:53:59.812451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.072 17:53:59 -- accel/accel.sh@21 -- # val= 00:07:43.072 17:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.072 17:53:59 -- accel/accel.sh@21 -- # val= 00:07:43.072 17:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.072 17:53:59 -- accel/accel.sh@21 -- # val=0x1 00:07:43.072 17:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.072 17:53:59 -- accel/accel.sh@21 -- # val= 00:07:43.072 17:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.072 17:53:59 -- accel/accel.sh@21 -- # val= 00:07:43.072 17:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.072 17:53:59 -- accel/accel.sh@21 -- # val=dif_verify 00:07:43.072 17:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.072 17:53:59 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.072 17:53:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:43.072 17:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.072 17:53:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:43.072 17:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.072 17:53:59 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:43.072 17:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.072 17:53:59 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:43.072 17:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.072 17:53:59 -- accel/accel.sh@21 -- # val= 00:07:43.072 17:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.072 17:53:59 -- accel/accel.sh@21 -- # val=software 00:07:43.072 17:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.072 17:53:59 -- accel/accel.sh@23 -- # accel_module=software 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.072 17:53:59 -- accel/accel.sh@21 
-- # val=32 00:07:43.072 17:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.072 17:53:59 -- accel/accel.sh@21 -- # val=32 00:07:43.072 17:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.072 17:53:59 -- accel/accel.sh@21 -- # val=1 00:07:43.072 17:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.072 17:53:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:43.072 17:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.072 17:53:59 -- accel/accel.sh@21 -- # val=No 00:07:43.072 17:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.072 17:53:59 -- accel/accel.sh@21 -- # val= 00:07:43.072 17:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.072 17:53:59 -- accel/accel.sh@21 -- # val= 00:07:43.072 17:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.072 17:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:44.486 17:54:01 -- accel/accel.sh@21 -- # val= 00:07:44.486 17:54:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.486 17:54:01 -- accel/accel.sh@20 -- # IFS=: 00:07:44.486 17:54:01 -- accel/accel.sh@20 -- # read -r var val 00:07:44.486 17:54:01 -- accel/accel.sh@21 -- # val= 00:07:44.486 17:54:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.486 17:54:01 -- accel/accel.sh@20 -- # IFS=: 00:07:44.486 17:54:01 -- accel/accel.sh@20 -- # read -r var val 00:07:44.486 17:54:01 -- accel/accel.sh@21 -- # val= 00:07:44.486 17:54:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.486 17:54:01 -- accel/accel.sh@20 -- # IFS=: 00:07:44.486 17:54:01 -- accel/accel.sh@20 -- # read -r var val 00:07:44.486 17:54:01 -- accel/accel.sh@21 -- # val= 00:07:44.486 17:54:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.486 17:54:01 -- accel/accel.sh@20 -- # IFS=: 00:07:44.486 17:54:01 -- accel/accel.sh@20 -- # read -r var val 00:07:44.486 17:54:01 -- accel/accel.sh@21 -- # val= 00:07:44.486 17:54:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.486 17:54:01 -- accel/accel.sh@20 -- # IFS=: 00:07:44.486 17:54:01 -- accel/accel.sh@20 -- # read -r var val 00:07:44.486 17:54:01 -- accel/accel.sh@21 -- # val= 00:07:44.486 17:54:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.486 17:54:01 -- accel/accel.sh@20 -- # IFS=: 00:07:44.486 17:54:01 -- accel/accel.sh@20 -- # read -r var val 00:07:44.486 17:54:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:44.486 17:54:01 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:44.486 17:54:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.486 00:07:44.486 real 0m2.954s 00:07:44.486 user 0m2.448s 00:07:44.486 sys 0m0.310s 00:07:44.486 17:54:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:44.486 17:54:01 -- common/autotest_common.sh@10 -- # set +x 00:07:44.486 ************************************ 00:07:44.486 END TEST 
accel_dif_verify 00:07:44.486 ************************************ 00:07:44.486 17:54:01 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:44.486 17:54:01 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:44.486 17:54:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.486 17:54:01 -- common/autotest_common.sh@10 -- # set +x 00:07:44.486 ************************************ 00:07:44.486 START TEST accel_dif_generate 00:07:44.486 ************************************ 00:07:44.486 17:54:01 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:44.486 17:54:01 -- accel/accel.sh@16 -- # local accel_opc 00:07:44.486 17:54:01 -- accel/accel.sh@17 -- # local accel_module 00:07:44.486 17:54:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:44.486 17:54:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:44.486 17:54:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.486 17:54:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.486 17:54:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.486 17:54:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.486 17:54:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.486 17:54:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.486 17:54:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.486 17:54:01 -- accel/accel.sh@42 -- # jq -r . 00:07:44.486 [2024-11-26 17:54:01.168975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:44.486 [2024-11-26 17:54:01.169138] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71386 ] 00:07:44.486 [2024-11-26 17:54:01.320613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.486 [2024-11-26 17:54:01.369987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.883 17:54:02 -- accel/accel.sh@18 -- # out=' 00:07:45.883 SPDK Configuration: 00:07:45.883 Core mask: 0x1 00:07:45.883 00:07:45.883 Accel Perf Configuration: 00:07:45.883 Workload Type: dif_generate 00:07:45.883 Vector size: 4096 bytes 00:07:45.883 Transfer size: 4096 bytes 00:07:45.883 Block size: 512 bytes 00:07:45.883 Metadata size: 8 bytes 00:07:45.883 Vector count 1 00:07:45.883 Module: software 00:07:45.883 Queue depth: 32 00:07:45.883 Allocate depth: 32 00:07:45.883 # threads/core: 1 00:07:45.883 Run time: 1 seconds 00:07:45.883 Verify: No 00:07:45.883 00:07:45.883 Running for 1 seconds... 
00:07:45.883 00:07:45.883 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:45.883 ------------------------------------------------------------------------------------ 00:07:45.883 0,0 141664/s 562 MiB/s 0 0 00:07:45.883 ==================================================================================== 00:07:45.883 Total 141664/s 553 MiB/s 0 0' 00:07:45.883 17:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:45.883 17:54:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:45.883 17:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:45.883 17:54:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:45.883 17:54:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.883 17:54:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.883 17:54:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.883 17:54:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.883 17:54:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.883 17:54:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.883 17:54:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.883 17:54:02 -- accel/accel.sh@42 -- # jq -r . 00:07:45.883 [2024-11-26 17:54:02.642945] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:45.883 [2024-11-26 17:54:02.643108] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71407 ] 00:07:45.883 [2024-11-26 17:54:02.793005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.143 [2024-11-26 17:54:02.843988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.143 17:54:02 -- accel/accel.sh@21 -- # val= 00:07:46.143 17:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.143 17:54:02 -- accel/accel.sh@21 -- # val= 00:07:46.143 17:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.143 17:54:02 -- accel/accel.sh@21 -- # val=0x1 00:07:46.143 17:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.143 17:54:02 -- accel/accel.sh@21 -- # val= 00:07:46.143 17:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.143 17:54:02 -- accel/accel.sh@21 -- # val= 00:07:46.143 17:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.143 17:54:02 -- accel/accel.sh@21 -- # val=dif_generate 00:07:46.143 17:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.143 17:54:02 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.143 17:54:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:46.143 17:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # read -r var val 
00:07:46.143 17:54:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:46.143 17:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.143 17:54:02 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:46.143 17:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.143 17:54:02 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:46.143 17:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.143 17:54:02 -- accel/accel.sh@21 -- # val= 00:07:46.143 17:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.143 17:54:02 -- accel/accel.sh@21 -- # val=software 00:07:46.143 17:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.143 17:54:02 -- accel/accel.sh@23 -- # accel_module=software 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.143 17:54:02 -- accel/accel.sh@21 -- # val=32 00:07:46.143 17:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.143 17:54:02 -- accel/accel.sh@21 -- # val=32 00:07:46.143 17:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.143 17:54:02 -- accel/accel.sh@21 -- # val=1 00:07:46.143 17:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.143 17:54:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:46.143 17:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.143 17:54:02 -- accel/accel.sh@21 -- # val=No 00:07:46.143 17:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.143 17:54:02 -- accel/accel.sh@21 -- # val= 00:07:46.143 17:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.143 17:54:02 -- accel/accel.sh@21 -- # val= 00:07:46.143 17:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.143 17:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:47.521 17:54:04 -- accel/accel.sh@21 -- # val= 00:07:47.521 17:54:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.521 17:54:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.521 17:54:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.521 17:54:04 -- accel/accel.sh@21 -- # val= 00:07:47.521 17:54:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.521 17:54:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.521 17:54:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.521 17:54:04 -- accel/accel.sh@21 -- # val= 00:07:47.521 17:54:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.521 17:54:04 -- 
accel/accel.sh@20 -- # IFS=: 00:07:47.522 17:54:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.522 17:54:04 -- accel/accel.sh@21 -- # val= 00:07:47.522 17:54:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.522 17:54:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.522 17:54:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.522 17:54:04 -- accel/accel.sh@21 -- # val= 00:07:47.522 17:54:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.522 17:54:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.522 17:54:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.522 17:54:04 -- accel/accel.sh@21 -- # val= 00:07:47.522 17:54:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.522 17:54:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.522 17:54:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.522 17:54:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:47.522 17:54:04 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:47.522 17:54:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.522 00:07:47.522 real 0m2.947s 00:07:47.522 user 0m2.436s 00:07:47.522 sys 0m0.306s 00:07:47.522 17:54:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.522 17:54:04 -- common/autotest_common.sh@10 -- # set +x 00:07:47.522 ************************************ 00:07:47.522 END TEST accel_dif_generate 00:07:47.522 ************************************ 00:07:47.522 17:54:04 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:47.522 17:54:04 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:47.522 17:54:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.522 17:54:04 -- common/autotest_common.sh@10 -- # set +x 00:07:47.522 ************************************ 00:07:47.522 START TEST accel_dif_generate_copy 00:07:47.522 ************************************ 00:07:47.522 17:54:04 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:47.522 17:54:04 -- accel/accel.sh@16 -- # local accel_opc 00:07:47.522 17:54:04 -- accel/accel.sh@17 -- # local accel_module 00:07:47.522 17:54:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:47.522 17:54:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:47.522 17:54:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.522 17:54:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.522 17:54:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.522 17:54:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.522 17:54:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.522 17:54:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.522 17:54:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.522 17:54:04 -- accel/accel.sh@42 -- # jq -r . 00:07:47.522 [2024-11-26 17:54:04.183053] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
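The accel_dif_generate test that just completed above is the producer side of the same format: instead of checking an existing tuple, it computes and writes one per 512-byte block. Continuing the earlier dif_verify sketch, with the same caveats:

    /* Producer counterpart of dif_verify_block above: fill in the
     * protection tuple for one 512-byte block (illustrative only). */
    static void dif_generate_block(const uint8_t block[512],
                                   struct dif_tuple *t,
                                   uint16_t app_tag, uint32_t ref_tag)
    {
        t->guard   = crc16_t10dif(block, 512);
        t->app_tag = app_tag;
        t->ref_tag = ref_tag;
    }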
00:07:47.522 [2024-11-26 17:54:04.183611] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71442 ] 00:07:47.522 [2024-11-26 17:54:04.336632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.522 [2024-11-26 17:54:04.387449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.899 17:54:05 -- accel/accel.sh@18 -- # out=' 00:07:48.899 SPDK Configuration: 00:07:48.899 Core mask: 0x1 00:07:48.899 00:07:48.899 Accel Perf Configuration: 00:07:48.899 Workload Type: dif_generate_copy 00:07:48.899 Vector size: 4096 bytes 00:07:48.899 Transfer size: 4096 bytes 00:07:48.899 Vector count 1 00:07:48.899 Module: software 00:07:48.899 Queue depth: 32 00:07:48.899 Allocate depth: 32 00:07:48.899 # threads/core: 1 00:07:48.899 Run time: 1 seconds 00:07:48.899 Verify: No 00:07:48.899 00:07:48.899 Running for 1 seconds... 00:07:48.899 00:07:48.899 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:48.899 ------------------------------------------------------------------------------------ 00:07:48.899 0,0 103840/s 411 MiB/s 0 0 00:07:48.899 ==================================================================================== 00:07:48.899 Total 103840/s 405 MiB/s 0 0' 00:07:48.899 17:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:48.899 17:54:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:48.899 17:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:48.899 17:54:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:48.899 17:54:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.899 17:54:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:48.899 17:54:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.899 17:54:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.899 17:54:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:48.899 17:54:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:48.899 17:54:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:48.899 17:54:05 -- accel/accel.sh@42 -- # jq -r . 00:07:48.899 [2024-11-26 17:54:05.661365] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
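dif_generate_copy, running in this section, fuses a copy with tuple generation: data moves from source to destination and the destination additionally receives one freshly computed tuple per block. One plausible shape, reusing dif_generate_block from the sketch above and assuming an extended-block layout in which each 512-byte block is immediately followed by its 8-byte tuple (that layout is an assumption for illustration; frameworks also support separate metadata buffers):

    #include <string.h>

    /* Copy nblocks of 512-byte data while generating protection info,
     * interleaving each block with its tuple in the destination. */
    static void dif_generate_copy(uint8_t *dst, const uint8_t *src,
                                  size_t nblocks, uint32_t first_ref_tag)
    {
        for (size_t i = 0; i < nblocks; i++) {
            struct dif_tuple t;
            memcpy(dst, src, 512);
            dif_generate_block(dst, &t, 0, first_ref_tag + (uint32_t)i);
            memcpy(dst + 512, &t, sizeof(t));
            src += 512;
            dst += 512 + sizeof(t);
        }
    }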
00:07:48.899 [2024-11-26 17:54:05.661527] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71463 ] 00:07:48.899 [2024-11-26 17:54:05.815219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.158 [2024-11-26 17:54:05.863848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.158 17:54:05 -- accel/accel.sh@21 -- # val= 00:07:49.158 17:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.158 17:54:05 -- accel/accel.sh@21 -- # val= 00:07:49.158 17:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.158 17:54:05 -- accel/accel.sh@21 -- # val=0x1 00:07:49.158 17:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.158 17:54:05 -- accel/accel.sh@21 -- # val= 00:07:49.158 17:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.158 17:54:05 -- accel/accel.sh@21 -- # val= 00:07:49.158 17:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.158 17:54:05 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:49.158 17:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.158 17:54:05 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.158 17:54:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:49.158 17:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.158 17:54:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:49.158 17:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.158 17:54:05 -- accel/accel.sh@21 -- # val= 00:07:49.158 17:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.158 17:54:05 -- accel/accel.sh@21 -- # val=software 00:07:49.158 17:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.158 17:54:05 -- accel/accel.sh@23 -- # accel_module=software 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.158 17:54:05 -- accel/accel.sh@21 -- # val=32 00:07:49.158 17:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.158 17:54:05 -- accel/accel.sh@21 -- # val=32 00:07:49.158 17:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.158 17:54:05 -- accel/accel.sh@21 
-- # val=1 00:07:49.158 17:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.158 17:54:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:49.158 17:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.158 17:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.158 17:54:05 -- accel/accel.sh@21 -- # val=No 00:07:49.159 17:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.159 17:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.159 17:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.159 17:54:05 -- accel/accel.sh@21 -- # val= 00:07:49.159 17:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.159 17:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.159 17:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.159 17:54:05 -- accel/accel.sh@21 -- # val= 00:07:49.159 17:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.159 17:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.159 17:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:50.633 17:54:07 -- accel/accel.sh@21 -- # val= 00:07:50.633 17:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.633 17:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:50.633 17:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:50.633 17:54:07 -- accel/accel.sh@21 -- # val= 00:07:50.633 17:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.633 17:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:50.633 17:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:50.633 17:54:07 -- accel/accel.sh@21 -- # val= 00:07:50.633 17:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.633 17:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:50.633 17:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:50.633 17:54:07 -- accel/accel.sh@21 -- # val= 00:07:50.633 17:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.633 17:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:50.633 17:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:50.633 17:54:07 -- accel/accel.sh@21 -- # val= 00:07:50.633 17:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.633 17:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:50.633 17:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:50.633 17:54:07 -- accel/accel.sh@21 -- # val= 00:07:50.633 17:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.633 17:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:50.633 17:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:50.633 17:54:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:50.633 17:54:07 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:50.633 17:54:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.633 00:07:50.633 real 0m2.955s 00:07:50.633 user 0m2.435s 00:07:50.633 sys 0m0.323s 00:07:50.633 ************************************ 00:07:50.633 END TEST accel_dif_generate_copy 00:07:50.633 ************************************ 00:07:50.633 17:54:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:50.633 17:54:07 -- common/autotest_common.sh@10 -- # set +x 00:07:50.633 17:54:07 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:50.633 17:54:07 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:50.633 17:54:07 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:50.633 17:54:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.633 17:54:07 -- 
common/autotest_common.sh@10 -- # set +x 00:07:50.633 ************************************ 00:07:50.633 START TEST accel_comp 00:07:50.633 ************************************ 00:07:50.633 17:54:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:50.633 17:54:07 -- accel/accel.sh@16 -- # local accel_opc 00:07:50.633 17:54:07 -- accel/accel.sh@17 -- # local accel_module 00:07:50.633 17:54:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:50.633 17:54:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:50.633 17:54:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.633 17:54:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.633 17:54:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.633 17:54:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.633 17:54:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.633 17:54:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.633 17:54:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.633 17:54:07 -- accel/accel.sh@42 -- # jq -r . 00:07:50.633 [2024-11-26 17:54:07.214850] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:50.633 [2024-11-26 17:54:07.215018] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71498 ] 00:07:50.633 [2024-11-26 17:54:07.368564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.633 [2024-11-26 17:54:07.419177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.011 17:54:08 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:52.011 00:07:52.011 SPDK Configuration: 00:07:52.011 Core mask: 0x1 00:07:52.011 00:07:52.011 Accel Perf Configuration: 00:07:52.011 Workload Type: compress 00:07:52.011 Transfer size: 4096 bytes 00:07:52.011 Vector count 1 00:07:52.011 Module: software 00:07:52.011 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:52.011 Queue depth: 32 00:07:52.011 Allocate depth: 32 00:07:52.011 # threads/core: 1 00:07:52.011 Run time: 1 seconds 00:07:52.011 Verify: No 00:07:52.011 00:07:52.011 Running for 1 seconds... 
00:07:52.011 00:07:52.011 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:52.011 ------------------------------------------------------------------------------------ 00:07:52.011 0,0 48640/s 202 MiB/s 0 0 00:07:52.011 ==================================================================================== 00:07:52.011 Total 48640/s 190 MiB/s 0 0' 00:07:52.011 17:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.011 17:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.011 17:54:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:52.011 17:54:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:52.011 17:54:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.011 17:54:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:52.011 17:54:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.011 17:54:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.011 17:54:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:52.011 17:54:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:52.011 17:54:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:52.011 17:54:08 -- accel/accel.sh@42 -- # jq -r . 00:07:52.011 [2024-11-26 17:54:08.710686] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:52.011 [2024-11-26 17:54:08.711146] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71519 ] 00:07:52.011 [2024-11-26 17:54:08.866967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.011 [2024-11-26 17:54:08.918069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.270 17:54:08 -- accel/accel.sh@21 -- # val= 00:07:52.270 17:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.270 17:54:08 -- accel/accel.sh@21 -- # val= 00:07:52.270 17:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.270 17:54:08 -- accel/accel.sh@21 -- # val= 00:07:52.270 17:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.270 17:54:08 -- accel/accel.sh@21 -- # val=0x1 00:07:52.270 17:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.270 17:54:08 -- accel/accel.sh@21 -- # val= 00:07:52.270 17:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.270 17:54:08 -- accel/accel.sh@21 -- # val= 00:07:52.270 17:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.270 17:54:08 -- accel/accel.sh@21 -- # val=compress 00:07:52.270 17:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.270 17:54:08 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # IFS=: 
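The accel_comp section compresses 4 KiB chunks of the input file named in the configuration (/home/vagrant/spdk_repo/spdk/test/accel/bib) through the software module. As a standalone analogue of a single operation, the same shape using zlib's one-shot API (zlib is purely a stand-in to keep the sketch self-contained; it is not necessarily the backend the software module uses):

    #include <stddef.h>
    #include <stdint.h>
    #include <zlib.h>

    /* Compress one chunk; returns 0 on success and stores the
     * compressed length in *out_len. */
    int compress_chunk(const uint8_t *in, size_t in_len,
                       uint8_t *out, size_t out_cap, size_t *out_len)
    {
        uLongf dlen = (uLongf)out_cap;
        if (compress(out, &dlen, in, (uLong)in_len) != Z_OK)
            return -1;
        *out_len = (size_t)dlen;
        return 0;
    }

The 48640 ops/s over 4 KiB inputs corresponds to the 190 MiB/s on the Total line.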
00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.270 17:54:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:52.270 17:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.270 17:54:08 -- accel/accel.sh@21 -- # val= 00:07:52.270 17:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.270 17:54:08 -- accel/accel.sh@21 -- # val=software 00:07:52.270 17:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.270 17:54:08 -- accel/accel.sh@23 -- # accel_module=software 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.270 17:54:08 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:52.270 17:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.270 17:54:08 -- accel/accel.sh@21 -- # val=32 00:07:52.270 17:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.270 17:54:08 -- accel/accel.sh@21 -- # val=32 00:07:52.270 17:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.270 17:54:08 -- accel/accel.sh@21 -- # val=1 00:07:52.270 17:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.270 17:54:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:52.270 17:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.270 17:54:08 -- accel/accel.sh@21 -- # val=No 00:07:52.270 17:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.270 17:54:08 -- accel/accel.sh@21 -- # val= 00:07:52.270 17:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.270 17:54:08 -- accel/accel.sh@21 -- # val= 00:07:52.270 17:54:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.270 17:54:08 -- accel/accel.sh@20 -- # read -r var val 00:07:53.644 17:54:10 -- accel/accel.sh@21 -- # val= 00:07:53.644 17:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.644 17:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:53.644 17:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:53.644 17:54:10 -- accel/accel.sh@21 -- # val= 00:07:53.644 17:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.644 17:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:53.644 17:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:53.644 17:54:10 -- accel/accel.sh@21 -- # val= 00:07:53.644 17:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.644 17:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:53.644 17:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:53.644 17:54:10 -- accel/accel.sh@21 -- # val= 
00:07:53.644 17:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.644 17:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:53.644 17:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:53.644 17:54:10 -- accel/accel.sh@21 -- # val= 00:07:53.644 17:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.644 17:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:53.644 17:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:53.644 17:54:10 -- accel/accel.sh@21 -- # val= 00:07:53.644 17:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.644 17:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:53.644 17:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:53.644 17:54:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:53.644 17:54:10 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:53.644 17:54:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.644 00:07:53.644 real 0m2.990s 00:07:53.644 user 0m2.468s 00:07:53.644 sys 0m0.314s 00:07:53.644 ************************************ 00:07:53.644 END TEST accel_comp 00:07:53.644 ************************************ 00:07:53.644 17:54:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.644 17:54:10 -- common/autotest_common.sh@10 -- # set +x 00:07:53.644 17:54:10 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:53.645 17:54:10 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:53.645 17:54:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.645 17:54:10 -- common/autotest_common.sh@10 -- # set +x 00:07:53.645 ************************************ 00:07:53.645 START TEST accel_decomp 00:07:53.645 ************************************ 00:07:53.645 17:54:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:53.645 17:54:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:53.645 17:54:10 -- accel/accel.sh@17 -- # local accel_module 00:07:53.645 17:54:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:53.645 17:54:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:53.645 17:54:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:53.645 17:54:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.645 17:54:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.645 17:54:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.645 17:54:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.645 17:54:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.645 17:54:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.645 17:54:10 -- accel/accel.sh@42 -- # jq -r . 00:07:53.645 [2024-11-26 17:54:10.273534] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:53.645 [2024-11-26 17:54:10.273713] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71549 ] 00:07:53.645 [2024-11-26 17:54:10.425885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.645 [2024-11-26 17:54:10.474612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.021 17:54:11 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:55.021 00:07:55.021 SPDK Configuration: 00:07:55.021 Core mask: 0x1 00:07:55.021 00:07:55.021 Accel Perf Configuration: 00:07:55.021 Workload Type: decompress 00:07:55.021 Transfer size: 4096 bytes 00:07:55.021 Vector count 1 00:07:55.021 Module: software 00:07:55.021 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:55.021 Queue depth: 32 00:07:55.021 Allocate depth: 32 00:07:55.021 # threads/core: 1 00:07:55.021 Run time: 1 seconds 00:07:55.021 Verify: Yes 00:07:55.021 00:07:55.021 Running for 1 seconds... 00:07:55.021 00:07:55.021 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:55.021 ------------------------------------------------------------------------------------ 00:07:55.021 0,0 58400/s 107 MiB/s 0 0 00:07:55.021 ==================================================================================== 00:07:55.021 Total 58400/s 228 MiB/s 0 0' 00:07:55.021 17:54:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:55.021 17:54:11 -- accel/accel.sh@20 -- # IFS=: 00:07:55.021 17:54:11 -- accel/accel.sh@20 -- # read -r var val 00:07:55.021 17:54:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:55.021 17:54:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:55.021 17:54:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:55.021 17:54:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.021 17:54:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.021 17:54:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:55.021 17:54:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:55.021 17:54:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:55.021 17:54:11 -- accel/accel.sh@42 -- # jq -r . 00:07:55.021 [2024-11-26 17:54:11.738100] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
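A quick cross-check on the result tables in this log: the Total bandwidth lines are consistent with transfers/s multiplied by the configured transfer size, while the per-core rows evidently use a different size (presumably the compressed input), so only the totals are checked here. A minimal sketch, using the figures from the accel_decomp table above:

# illustrative cross-check only, not part of the test suite
transfers_per_sec=58400   # "Total 58400/s" in the table above
transfer_size=4096        # "Transfer size: 4096 bytes"
echo "$(( transfers_per_sec * transfer_size / 1048576 )) MiB/s"   # prints "228 MiB/s"

The same arithmetic reproduces the 468, 747, 1816, 226 and 465 MiB/s totals reported further down.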
00:07:55.021 [2024-11-26 17:54:11.738344] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71575 ] 00:07:55.021 [2024-11-26 17:54:11.898852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.280 [2024-11-26 17:54:11.947016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.280 17:54:11 -- accel/accel.sh@21 -- # val= 00:07:55.280 17:54:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.280 17:54:11 -- accel/accel.sh@20 -- # IFS=: 00:07:55.280 17:54:11 -- accel/accel.sh@20 -- # read -r var val 00:07:55.280 17:54:11 -- accel/accel.sh@21 -- # val= 00:07:55.280 17:54:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.280 17:54:11 -- accel/accel.sh@20 -- # IFS=: 00:07:55.280 17:54:11 -- accel/accel.sh@20 -- # read -r var val 00:07:55.280 17:54:11 -- accel/accel.sh@21 -- # val= 00:07:55.280 17:54:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.280 17:54:11 -- accel/accel.sh@20 -- # IFS=: 00:07:55.280 17:54:11 -- accel/accel.sh@20 -- # read -r var val 00:07:55.280 17:54:11 -- accel/accel.sh@21 -- # val=0x1 00:07:55.280 17:54:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.280 17:54:11 -- accel/accel.sh@20 -- # IFS=: 00:07:55.280 17:54:11 -- accel/accel.sh@20 -- # read -r var val 00:07:55.280 17:54:11 -- accel/accel.sh@21 -- # val= 00:07:55.280 17:54:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.280 17:54:11 -- accel/accel.sh@20 -- # IFS=: 00:07:55.280 17:54:11 -- accel/accel.sh@20 -- # read -r var val 00:07:55.280 17:54:12 -- accel/accel.sh@21 -- # val= 00:07:55.280 17:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.280 17:54:12 -- accel/accel.sh@21 -- # val=decompress 00:07:55.280 17:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.280 17:54:12 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.280 17:54:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:55.280 17:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.280 17:54:12 -- accel/accel.sh@21 -- # val= 00:07:55.280 17:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.280 17:54:12 -- accel/accel.sh@21 -- # val=software 00:07:55.280 17:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.280 17:54:12 -- accel/accel.sh@23 -- # accel_module=software 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.280 17:54:12 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:55.280 17:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.280 17:54:12 -- accel/accel.sh@21 -- # val=32 00:07:55.280 17:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.280 17:54:12 -- 
accel/accel.sh@21 -- # val=32 00:07:55.280 17:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.280 17:54:12 -- accel/accel.sh@21 -- # val=1 00:07:55.280 17:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.280 17:54:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:55.280 17:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.280 17:54:12 -- accel/accel.sh@21 -- # val=Yes 00:07:55.280 17:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.280 17:54:12 -- accel/accel.sh@21 -- # val= 00:07:55.280 17:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # read -r var val 00:07:55.280 17:54:12 -- accel/accel.sh@21 -- # val= 00:07:55.280 17:54:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # IFS=: 00:07:55.280 17:54:12 -- accel/accel.sh@20 -- # read -r var val 00:07:56.256 17:54:13 -- accel/accel.sh@21 -- # val= 00:07:56.256 17:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.256 17:54:13 -- accel/accel.sh@20 -- # IFS=: 00:07:56.256 17:54:13 -- accel/accel.sh@20 -- # read -r var val 00:07:56.256 17:54:13 -- accel/accel.sh@21 -- # val= 00:07:56.256 17:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.256 17:54:13 -- accel/accel.sh@20 -- # IFS=: 00:07:56.256 17:54:13 -- accel/accel.sh@20 -- # read -r var val 00:07:56.256 17:54:13 -- accel/accel.sh@21 -- # val= 00:07:56.256 17:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.256 17:54:13 -- accel/accel.sh@20 -- # IFS=: 00:07:56.256 17:54:13 -- accel/accel.sh@20 -- # read -r var val 00:07:56.256 17:54:13 -- accel/accel.sh@21 -- # val= 00:07:56.256 17:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.256 17:54:13 -- accel/accel.sh@20 -- # IFS=: 00:07:56.256 17:54:13 -- accel/accel.sh@20 -- # read -r var val 00:07:56.256 17:54:13 -- accel/accel.sh@21 -- # val= 00:07:56.256 17:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.256 17:54:13 -- accel/accel.sh@20 -- # IFS=: 00:07:56.256 17:54:13 -- accel/accel.sh@20 -- # read -r var val 00:07:56.256 17:54:13 -- accel/accel.sh@21 -- # val= 00:07:56.256 17:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.256 17:54:13 -- accel/accel.sh@20 -- # IFS=: 00:07:56.256 17:54:13 -- accel/accel.sh@20 -- # read -r var val 00:07:56.538 17:54:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:56.538 17:54:13 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:56.538 17:54:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.538 00:07:56.538 real 0m2.953s 00:07:56.538 user 0m2.429s 00:07:56.538 sys 0m0.322s 00:07:56.538 17:54:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:56.538 17:54:13 -- common/autotest_common.sh@10 -- # set +x 00:07:56.538 ************************************ 00:07:56.538 END TEST accel_decomp 00:07:56.538 ************************************ 00:07:56.538 17:54:13 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:07:56.538 17:54:13 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:56.538 17:54:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.538 17:54:13 -- common/autotest_common.sh@10 -- # set +x 00:07:56.538 ************************************ 00:07:56.538 START TEST accel_decmop_full 00:07:56.538 ************************************ 00:07:56.538 17:54:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:56.538 17:54:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:56.538 17:54:13 -- accel/accel.sh@17 -- # local accel_module 00:07:56.538 17:54:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:56.538 17:54:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:56.538 17:54:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:56.538 17:54:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:56.538 17:54:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.538 17:54:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.538 17:54:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:56.538 17:54:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:56.538 17:54:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:56.538 17:54:13 -- accel/accel.sh@42 -- # jq -r . 00:07:56.538 [2024-11-26 17:54:13.293617] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:56.539 [2024-11-26 17:54:13.293765] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71605 ] 00:07:56.539 [2024-11-26 17:54:13.446319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.798 [2024-11-26 17:54:13.494720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.177 17:54:14 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:58.177 00:07:58.177 SPDK Configuration: 00:07:58.177 Core mask: 0x1 00:07:58.177 00:07:58.177 Accel Perf Configuration: 00:07:58.177 Workload Type: decompress 00:07:58.177 Transfer size: 111250 bytes 00:07:58.177 Vector count 1 00:07:58.177 Module: software 00:07:58.177 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:58.177 Queue depth: 32 00:07:58.177 Allocate depth: 32 00:07:58.177 # threads/core: 1 00:07:58.177 Run time: 1 seconds 00:07:58.177 Verify: Yes 00:07:58.177 00:07:58.177 Running for 1 seconds... 
00:07:58.177 00:07:58.177 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:58.177 ------------------------------------------------------------------------------------ 00:07:58.177 0,0 4416/s 182 MiB/s 0 0 00:07:58.177 ==================================================================================== 00:07:58.177 Total 4416/s 468 MiB/s 0 0' 00:07:58.177 17:54:14 -- accel/accel.sh@20 -- # IFS=: 00:07:58.177 17:54:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:58.177 17:54:14 -- accel/accel.sh@20 -- # read -r var val 00:07:58.177 17:54:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:58.177 17:54:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:58.177 17:54:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:58.177 17:54:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.177 17:54:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.177 17:54:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:58.177 17:54:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:58.177 17:54:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:58.177 17:54:14 -- accel/accel.sh@42 -- # jq -r . 00:07:58.177 [2024-11-26 17:54:14.771527] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:58.177 [2024-11-26 17:54:14.771679] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71631 ] 00:07:58.177 [2024-11-26 17:54:14.921604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.177 [2024-11-26 17:54:14.969723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.177 17:54:15 -- accel/accel.sh@21 -- # val= 00:07:58.177 17:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.177 17:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:58.177 17:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:58.177 17:54:15 -- accel/accel.sh@21 -- # val= 00:07:58.177 17:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.177 17:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:58.177 17:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:58.177 17:54:15 -- accel/accel.sh@21 -- # val= 00:07:58.177 17:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.177 17:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:58.177 17:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:58.177 17:54:15 -- accel/accel.sh@21 -- # val=0x1 00:07:58.177 17:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.177 17:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:58.177 17:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:58.177 17:54:15 -- accel/accel.sh@21 -- # val= 00:07:58.177 17:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.177 17:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:58.177 17:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:58.177 17:54:15 -- accel/accel.sh@21 -- # val= 00:07:58.177 17:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.177 17:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:58.177 17:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:58.177 17:54:15 -- accel/accel.sh@21 -- # val=decompress 00:07:58.177 17:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.177 17:54:15 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:58.177 17:54:15 -- accel/accel.sh@20 
-- # IFS=: 00:07:58.177 17:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:58.177 17:54:15 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:58.177 17:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.177 17:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:58.177 17:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:58.177 17:54:15 -- accel/accel.sh@21 -- # val= 00:07:58.177 17:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.177 17:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:58.177 17:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:58.177 17:54:15 -- accel/accel.sh@21 -- # val=software 00:07:58.178 17:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.178 17:54:15 -- accel/accel.sh@23 -- # accel_module=software 00:07:58.178 17:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:58.178 17:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:58.178 17:54:15 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:58.178 17:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.178 17:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:58.178 17:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:58.178 17:54:15 -- accel/accel.sh@21 -- # val=32 00:07:58.178 17:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.178 17:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:58.178 17:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:58.178 17:54:15 -- accel/accel.sh@21 -- # val=32 00:07:58.178 17:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.178 17:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:58.178 17:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:58.178 17:54:15 -- accel/accel.sh@21 -- # val=1 00:07:58.178 17:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.178 17:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:58.178 17:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:58.178 17:54:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:58.178 17:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.178 17:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:58.178 17:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:58.178 17:54:15 -- accel/accel.sh@21 -- # val=Yes 00:07:58.178 17:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.178 17:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:58.178 17:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:58.178 17:54:15 -- accel/accel.sh@21 -- # val= 00:07:58.178 17:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.178 17:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:58.178 17:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:58.178 17:54:15 -- accel/accel.sh@21 -- # val= 00:07:58.178 17:54:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.178 17:54:15 -- accel/accel.sh@20 -- # IFS=: 00:07:58.178 17:54:15 -- accel/accel.sh@20 -- # read -r var val 00:07:59.555 17:54:16 -- accel/accel.sh@21 -- # val= 00:07:59.555 17:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.555 17:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:59.555 17:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:59.555 17:54:16 -- accel/accel.sh@21 -- # val= 00:07:59.555 17:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.555 17:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:59.555 17:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:59.555 17:54:16 -- accel/accel.sh@21 -- # val= 00:07:59.555 17:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.555 17:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:59.555 17:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:59.555 17:54:16 -- accel/accel.sh@21 -- # 
val= 00:07:59.555 17:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.555 17:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:59.555 17:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:59.555 17:54:16 -- accel/accel.sh@21 -- # val= 00:07:59.555 17:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.555 17:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:59.555 17:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:59.555 17:54:16 -- accel/accel.sh@21 -- # val= 00:07:59.555 17:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.555 17:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:59.555 17:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:59.555 17:54:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:59.555 17:54:16 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:59.555 17:54:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:59.555 00:07:59.555 real 0m2.955s 00:07:59.555 user 0m2.441s 00:07:59.555 sys 0m0.314s 00:07:59.555 17:54:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:59.555 17:54:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.555 ************************************ 00:07:59.555 END TEST accel_decmop_full 00:07:59.555 ************************************ 00:07:59.555 17:54:16 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:59.555 17:54:16 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:59.555 17:54:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:59.555 17:54:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.555 ************************************ 00:07:59.555 START TEST accel_decomp_mcore 00:07:59.555 ************************************ 00:07:59.555 17:54:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:59.555 17:54:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:59.555 17:54:16 -- accel/accel.sh@17 -- # local accel_module 00:07:59.555 17:54:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:59.555 17:54:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:59.555 17:54:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:59.555 17:54:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:59.555 17:54:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.555 17:54:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.555 17:54:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:59.555 17:54:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:59.555 17:54:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:59.555 17:54:16 -- accel/accel.sh@42 -- # jq -r . 00:07:59.555 [2024-11-26 17:54:16.309145] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:59.555 [2024-11-26 17:54:16.309319] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71661 ] 00:07:59.555 [2024-11-26 17:54:16.460773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:59.814 [2024-11-26 17:54:16.514837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.814 [2024-11-26 17:54:16.514845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.814 [2024-11-26 17:54:16.514849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.814 [2024-11-26 17:54:16.514901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.189 17:54:17 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:01.189 00:08:01.189 SPDK Configuration: 00:08:01.189 Core mask: 0xf 00:08:01.189 00:08:01.189 Accel Perf Configuration: 00:08:01.189 Workload Type: decompress 00:08:01.189 Transfer size: 4096 bytes 00:08:01.189 Vector count 1 00:08:01.189 Module: software 00:08:01.189 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:01.189 Queue depth: 32 00:08:01.189 Allocate depth: 32 00:08:01.189 # threads/core: 1 00:08:01.189 Run time: 1 seconds 00:08:01.189 Verify: Yes 00:08:01.189 00:08:01.189 Running for 1 seconds... 00:08:01.189 00:08:01.189 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:01.189 ------------------------------------------------------------------------------------ 00:08:01.189 0,0 47424/s 87 MiB/s 0 0 00:08:01.189 3,0 46464/s 85 MiB/s 0 0 00:08:01.189 2,0 47168/s 86 MiB/s 0 0 00:08:01.189 1,0 50240/s 92 MiB/s 0 0 00:08:01.189 ==================================================================================== 00:08:01.189 Total 191296/s 747 MiB/s 0 0' 00:08:01.189 17:54:17 -- accel/accel.sh@20 -- # IFS=: 00:08:01.189 17:54:17 -- accel/accel.sh@20 -- # read -r var val 00:08:01.189 17:54:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:01.189 17:54:17 -- accel/accel.sh@12 -- # build_accel_config 00:08:01.189 17:54:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:01.189 17:54:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:01.189 17:54:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.189 17:54:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.189 17:54:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:01.189 17:54:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:01.189 17:54:17 -- accel/accel.sh@41 -- # local IFS=, 00:08:01.189 17:54:17 -- accel/accel.sh@42 -- # jq -r . 00:08:01.189 [2024-11-26 17:54:17.909925] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
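The -m 0xf argument on this accel_decomp_mcore run is a core mask selecting cores 0-3, which is why four "Reactor started on core" notices and four per-core result rows appear above. A short sketch of the decoding, assuming nothing beyond ordinary shell bit arithmetic:

# illustrative only: list the cores enabled by a hex core mask
mask=0xf
for core in {0..7}; do
  (( (mask >> core) & 1 )) && echo "core $core enabled"
done
# prints "core 0 enabled" through "core 3 enabled"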
00:08:01.189 [2024-11-26 17:54:17.910089] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71690 ] 00:08:01.189 [2024-11-26 17:54:18.061835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:01.447 [2024-11-26 17:54:18.144837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.447 [2024-11-26 17:54:18.145037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.447 [2024-11-26 17:54:18.145136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.447 [2024-11-26 17:54:18.145273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.447 17:54:18 -- accel/accel.sh@21 -- # val= 00:08:01.447 17:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.447 17:54:18 -- accel/accel.sh@20 -- # IFS=: 00:08:01.447 17:54:18 -- accel/accel.sh@20 -- # read -r var val 00:08:01.447 17:54:18 -- accel/accel.sh@21 -- # val= 00:08:01.447 17:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.447 17:54:18 -- accel/accel.sh@20 -- # IFS=: 00:08:01.447 17:54:18 -- accel/accel.sh@20 -- # read -r var val 00:08:01.447 17:54:18 -- accel/accel.sh@21 -- # val= 00:08:01.447 17:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.447 17:54:18 -- accel/accel.sh@20 -- # IFS=: 00:08:01.447 17:54:18 -- accel/accel.sh@20 -- # read -r var val 00:08:01.447 17:54:18 -- accel/accel.sh@21 -- # val=0xf 00:08:01.447 17:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.447 17:54:18 -- accel/accel.sh@20 -- # IFS=: 00:08:01.447 17:54:18 -- accel/accel.sh@20 -- # read -r var val 00:08:01.447 17:54:18 -- accel/accel.sh@21 -- # val= 00:08:01.447 17:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.447 17:54:18 -- accel/accel.sh@20 -- # IFS=: 00:08:01.447 17:54:18 -- accel/accel.sh@20 -- # read -r var val 00:08:01.447 17:54:18 -- accel/accel.sh@21 -- # val= 00:08:01.447 17:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.447 17:54:18 -- accel/accel.sh@20 -- # IFS=: 00:08:01.447 17:54:18 -- accel/accel.sh@20 -- # read -r var val 00:08:01.447 17:54:18 -- accel/accel.sh@21 -- # val=decompress 00:08:01.447 17:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.447 17:54:18 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:01.447 17:54:18 -- accel/accel.sh@20 -- # IFS=: 00:08:01.447 17:54:18 -- accel/accel.sh@20 -- # read -r var val 00:08:01.448 17:54:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:01.448 17:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.448 17:54:18 -- accel/accel.sh@20 -- # IFS=: 00:08:01.448 17:54:18 -- accel/accel.sh@20 -- # read -r var val 00:08:01.448 17:54:18 -- accel/accel.sh@21 -- # val= 00:08:01.448 17:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.448 17:54:18 -- accel/accel.sh@20 -- # IFS=: 00:08:01.448 17:54:18 -- accel/accel.sh@20 -- # read -r var val 00:08:01.448 17:54:18 -- accel/accel.sh@21 -- # val=software 00:08:01.448 17:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.448 17:54:18 -- accel/accel.sh@23 -- # accel_module=software 00:08:01.448 17:54:18 -- accel/accel.sh@20 -- # IFS=: 00:08:01.448 17:54:18 -- accel/accel.sh@20 -- # read -r var val 00:08:01.448 17:54:18 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:01.448 17:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.448 17:54:18 -- accel/accel.sh@20 -- # IFS=: 
00:08:01.448 17:54:18 -- accel/accel.sh@20 -- # read -r var val 00:08:01.448 17:54:18 -- accel/accel.sh@21 -- # val=32 00:08:01.448 17:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.448 17:54:18 -- accel/accel.sh@20 -- # IFS=: 00:08:01.448 17:54:18 -- accel/accel.sh@20 -- # read -r var val 00:08:01.448 17:54:18 -- accel/accel.sh@21 -- # val=32 00:08:01.448 17:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.448 17:54:18 -- accel/accel.sh@20 -- # IFS=: 00:08:01.448 17:54:18 -- accel/accel.sh@20 -- # read -r var val 00:08:01.448 17:54:18 -- accel/accel.sh@21 -- # val=1 00:08:01.448 17:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.448 17:54:18 -- accel/accel.sh@20 -- # IFS=: 00:08:01.448 17:54:18 -- accel/accel.sh@20 -- # read -r var val 00:08:01.448 17:54:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:01.448 17:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.448 17:54:18 -- accel/accel.sh@20 -- # IFS=: 00:08:01.448 17:54:18 -- accel/accel.sh@20 -- # read -r var val 00:08:01.448 17:54:18 -- accel/accel.sh@21 -- # val=Yes 00:08:01.448 17:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.448 17:54:18 -- accel/accel.sh@20 -- # IFS=: 00:08:01.448 17:54:18 -- accel/accel.sh@20 -- # read -r var val 00:08:01.448 17:54:18 -- accel/accel.sh@21 -- # val= 00:08:01.448 17:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.448 17:54:18 -- accel/accel.sh@20 -- # IFS=: 00:08:01.448 17:54:18 -- accel/accel.sh@20 -- # read -r var val 00:08:01.448 17:54:18 -- accel/accel.sh@21 -- # val= 00:08:01.448 17:54:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.448 17:54:18 -- accel/accel.sh@20 -- # IFS=: 00:08:01.448 17:54:18 -- accel/accel.sh@20 -- # read -r var val 00:08:02.896 17:54:19 -- accel/accel.sh@21 -- # val= 00:08:02.896 17:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.896 17:54:19 -- accel/accel.sh@20 -- # IFS=: 00:08:02.896 17:54:19 -- accel/accel.sh@20 -- # read -r var val 00:08:02.896 17:54:19 -- accel/accel.sh@21 -- # val= 00:08:02.897 17:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.897 17:54:19 -- accel/accel.sh@20 -- # IFS=: 00:08:02.897 17:54:19 -- accel/accel.sh@20 -- # read -r var val 00:08:02.897 17:54:19 -- accel/accel.sh@21 -- # val= 00:08:02.897 17:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.897 17:54:19 -- accel/accel.sh@20 -- # IFS=: 00:08:02.897 17:54:19 -- accel/accel.sh@20 -- # read -r var val 00:08:02.897 17:54:19 -- accel/accel.sh@21 -- # val= 00:08:02.897 17:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.897 17:54:19 -- accel/accel.sh@20 -- # IFS=: 00:08:02.897 17:54:19 -- accel/accel.sh@20 -- # read -r var val 00:08:02.897 17:54:19 -- accel/accel.sh@21 -- # val= 00:08:02.897 17:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.897 17:54:19 -- accel/accel.sh@20 -- # IFS=: 00:08:02.897 17:54:19 -- accel/accel.sh@20 -- # read -r var val 00:08:02.897 17:54:19 -- accel/accel.sh@21 -- # val= 00:08:02.897 17:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.897 17:54:19 -- accel/accel.sh@20 -- # IFS=: 00:08:02.897 17:54:19 -- accel/accel.sh@20 -- # read -r var val 00:08:02.897 17:54:19 -- accel/accel.sh@21 -- # val= 00:08:02.897 17:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.897 17:54:19 -- accel/accel.sh@20 -- # IFS=: 00:08:02.897 17:54:19 -- accel/accel.sh@20 -- # read -r var val 00:08:02.897 17:54:19 -- accel/accel.sh@21 -- # val= 00:08:02.897 17:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.897 17:54:19 -- accel/accel.sh@20 -- # IFS=: 00:08:02.897 17:54:19 -- 
accel/accel.sh@20 -- # read -r var val 00:08:02.897 17:54:19 -- accel/accel.sh@21 -- # val= 00:08:02.897 17:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.897 17:54:19 -- accel/accel.sh@20 -- # IFS=: 00:08:02.897 17:54:19 -- accel/accel.sh@20 -- # read -r var val 00:08:02.897 17:54:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:02.897 17:54:19 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:02.897 ************************************ 00:08:02.897 END TEST accel_decomp_mcore 00:08:02.897 ************************************ 00:08:02.897 17:54:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:02.897 00:08:02.897 real 0m3.260s 00:08:02.897 user 0m4.852s 00:08:02.897 sys 0m0.213s 00:08:02.897 17:54:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.897 17:54:19 -- common/autotest_common.sh@10 -- # set +x 00:08:02.897 17:54:19 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:02.897 17:54:19 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:02.897 17:54:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.897 17:54:19 -- common/autotest_common.sh@10 -- # set +x 00:08:02.897 ************************************ 00:08:02.897 START TEST accel_decomp_full_mcore 00:08:02.897 ************************************ 00:08:02.897 17:54:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:02.897 17:54:19 -- accel/accel.sh@16 -- # local accel_opc 00:08:02.897 17:54:19 -- accel/accel.sh@17 -- # local accel_module 00:08:02.897 17:54:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:02.897 17:54:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:02.897 17:54:19 -- accel/accel.sh@12 -- # build_accel_config 00:08:02.897 17:54:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:02.897 17:54:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.897 17:54:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.897 17:54:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:02.897 17:54:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:02.897 17:54:19 -- accel/accel.sh@41 -- # local IFS=, 00:08:02.897 17:54:19 -- accel/accel.sh@42 -- # jq -r . 00:08:02.897 [2024-11-26 17:54:19.637099] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:02.897 [2024-11-26 17:54:19.637465] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71723 ] 00:08:02.897 [2024-11-26 17:54:19.791805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.155 [2024-11-26 17:54:19.877697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.155 [2024-11-26 17:54:19.877952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.155 [2024-11-26 17:54:19.878159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.155 [2024-11-26 17:54:19.878014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.535 17:54:21 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:04.535 00:08:04.535 SPDK Configuration: 00:08:04.535 Core mask: 0xf 00:08:04.535 00:08:04.535 Accel Perf Configuration: 00:08:04.535 Workload Type: decompress 00:08:04.535 Transfer size: 111250 bytes 00:08:04.535 Vector count 1 00:08:04.535 Module: software 00:08:04.535 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:04.535 Queue depth: 32 00:08:04.535 Allocate depth: 32 00:08:04.535 # threads/core: 1 00:08:04.535 Run time: 1 seconds 00:08:04.535 Verify: Yes 00:08:04.535 00:08:04.535 Running for 1 seconds... 00:08:04.535 00:08:04.535 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:04.535 ------------------------------------------------------------------------------------ 00:08:04.535 0,0 4032/s 166 MiB/s 0 0 00:08:04.535 3,0 4352/s 179 MiB/s 0 0 00:08:04.535 2,0 4352/s 179 MiB/s 0 0 00:08:04.535 1,0 4384/s 181 MiB/s 0 0 00:08:04.535 ==================================================================================== 00:08:04.535 Total 17120/s 1816 MiB/s 0 0' 00:08:04.535 17:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:04.535 17:54:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:04.535 17:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:04.535 17:54:21 -- accel/accel.sh@12 -- # build_accel_config 00:08:04.535 17:54:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:04.535 17:54:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:04.535 17:54:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.535 17:54:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.535 17:54:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:04.535 17:54:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:04.535 17:54:21 -- accel/accel.sh@41 -- # local IFS=, 00:08:04.535 17:54:21 -- accel/accel.sh@42 -- # jq -r . 00:08:04.535 [2024-11-26 17:54:21.316412] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
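The *_full test variants add -o 0 to accel_perf, and the configuration above correspondingly reports a transfer size of 111250 bytes (apparently the full size of the bib test file) instead of the 4096-byte chunks used elsewhere. The same total-bandwidth cross-check holds at this size:

# illustrative cross-check for the full-buffer mcore table above
transfers_per_sec=17120    # "Total 17120/s"
transfer_size=111250       # "Transfer size: 111250 bytes"
echo "$(( transfers_per_sec * transfer_size / 1048576 )) MiB/s"   # prints "1816 MiB/s"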
00:08:04.535 [2024-11-26 17:54:21.316577] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71752 ] 00:08:04.794 [2024-11-26 17:54:21.470666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:04.794 [2024-11-26 17:54:21.553491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.794 [2024-11-26 17:54:21.553670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.794 [2024-11-26 17:54:21.553762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.794 [2024-11-26 17:54:21.553892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.794 17:54:21 -- accel/accel.sh@21 -- # val= 00:08:04.794 17:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.794 17:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:04.794 17:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:04.794 17:54:21 -- accel/accel.sh@21 -- # val= 00:08:04.794 17:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.794 17:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:04.794 17:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:04.794 17:54:21 -- accel/accel.sh@21 -- # val= 00:08:04.794 17:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.794 17:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:04.794 17:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:04.794 17:54:21 -- accel/accel.sh@21 -- # val=0xf 00:08:04.794 17:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.794 17:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:04.794 17:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:04.794 17:54:21 -- accel/accel.sh@21 -- # val= 00:08:04.794 17:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.794 17:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:04.794 17:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:04.794 17:54:21 -- accel/accel.sh@21 -- # val= 00:08:04.794 17:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.794 17:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:04.794 17:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:04.794 17:54:21 -- accel/accel.sh@21 -- # val=decompress 00:08:04.794 17:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.794 17:54:21 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:04.794 17:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:04.794 17:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:04.795 17:54:21 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:04.795 17:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.795 17:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:04.795 17:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:04.795 17:54:21 -- accel/accel.sh@21 -- # val= 00:08:04.795 17:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.795 17:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:04.795 17:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:04.795 17:54:21 -- accel/accel.sh@21 -- # val=software 00:08:04.795 17:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.795 17:54:21 -- accel/accel.sh@23 -- # accel_module=software 00:08:04.795 17:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:04.795 17:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:04.795 17:54:21 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:04.795 17:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.795 17:54:21 -- accel/accel.sh@20 -- # IFS=: 
00:08:04.795 17:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:04.795 17:54:21 -- accel/accel.sh@21 -- # val=32 00:08:04.795 17:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.795 17:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:04.795 17:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:04.795 17:54:21 -- accel/accel.sh@21 -- # val=32 00:08:04.795 17:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.795 17:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:04.795 17:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:04.795 17:54:21 -- accel/accel.sh@21 -- # val=1 00:08:04.795 17:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.795 17:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:04.795 17:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:04.795 17:54:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:04.795 17:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.795 17:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:04.795 17:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:04.795 17:54:21 -- accel/accel.sh@21 -- # val=Yes 00:08:04.795 17:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.795 17:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:04.795 17:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:04.795 17:54:21 -- accel/accel.sh@21 -- # val= 00:08:04.795 17:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.795 17:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:04.795 17:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:04.795 17:54:21 -- accel/accel.sh@21 -- # val= 00:08:04.795 17:54:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.795 17:54:21 -- accel/accel.sh@20 -- # IFS=: 00:08:04.795 17:54:21 -- accel/accel.sh@20 -- # read -r var val 00:08:06.172 17:54:22 -- accel/accel.sh@21 -- # val= 00:08:06.172 17:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.172 17:54:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.172 17:54:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.172 17:54:22 -- accel/accel.sh@21 -- # val= 00:08:06.172 17:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.172 17:54:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.172 17:54:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.172 17:54:22 -- accel/accel.sh@21 -- # val= 00:08:06.172 17:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.172 17:54:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.172 17:54:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.172 17:54:22 -- accel/accel.sh@21 -- # val= 00:08:06.172 17:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.172 17:54:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.172 17:54:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.172 17:54:22 -- accel/accel.sh@21 -- # val= 00:08:06.172 17:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.172 17:54:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.172 17:54:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.172 17:54:22 -- accel/accel.sh@21 -- # val= 00:08:06.172 17:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.172 17:54:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.172 17:54:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.172 17:54:22 -- accel/accel.sh@21 -- # val= 00:08:06.172 17:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.172 17:54:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.172 17:54:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.172 17:54:22 -- accel/accel.sh@21 -- # val= 00:08:06.172 17:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.172 17:54:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.172 17:54:22 -- 
accel/accel.sh@20 -- # read -r var val 00:08:06.172 17:54:22 -- accel/accel.sh@21 -- # val= 00:08:06.172 17:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.172 17:54:22 -- accel/accel.sh@20 -- # IFS=: 00:08:06.172 17:54:22 -- accel/accel.sh@20 -- # read -r var val 00:08:06.172 17:54:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:06.172 17:54:22 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:06.172 17:54:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:06.172 00:08:06.172 real 0m3.370s 00:08:06.172 user 0m5.029s 00:08:06.172 sys 0m0.265s 00:08:06.172 17:54:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:06.172 17:54:22 -- common/autotest_common.sh@10 -- # set +x 00:08:06.172 ************************************ 00:08:06.172 END TEST accel_decomp_full_mcore 00:08:06.172 ************************************ 00:08:06.172 17:54:22 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:06.172 17:54:23 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:06.172 17:54:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:06.172 17:54:23 -- common/autotest_common.sh@10 -- # set +x 00:08:06.172 ************************************ 00:08:06.172 START TEST accel_decomp_mthread 00:08:06.172 ************************************ 00:08:06.172 17:54:23 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:06.172 17:54:23 -- accel/accel.sh@16 -- # local accel_opc 00:08:06.172 17:54:23 -- accel/accel.sh@17 -- # local accel_module 00:08:06.172 17:54:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:06.172 17:54:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:06.172 17:54:23 -- accel/accel.sh@12 -- # build_accel_config 00:08:06.172 17:54:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:06.172 17:54:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.172 17:54:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.172 17:54:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:06.172 17:54:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:06.172 17:54:23 -- accel/accel.sh@41 -- # local IFS=, 00:08:06.172 17:54:23 -- accel/accel.sh@42 -- # jq -r . 00:08:06.172 [2024-11-26 17:54:23.064947] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:06.172 [2024-11-26 17:54:23.065083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71791 ] 00:08:06.431 [2024-11-26 17:54:23.217066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.431 [2024-11-26 17:54:23.294058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.811 17:54:24 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:07.811 00:08:07.811 SPDK Configuration: 00:08:07.811 Core mask: 0x1 00:08:07.811 00:08:07.811 Accel Perf Configuration: 00:08:07.811 Workload Type: decompress 00:08:07.811 Transfer size: 4096 bytes 00:08:07.811 Vector count 1 00:08:07.811 Module: software 00:08:07.811 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:07.811 Queue depth: 32 00:08:07.811 Allocate depth: 32 00:08:07.811 # threads/core: 2 00:08:07.811 Run time: 1 seconds 00:08:07.811 Verify: Yes 00:08:07.811 00:08:07.811 Running for 1 seconds... 00:08:07.811 00:08:07.811 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:07.811 ------------------------------------------------------------------------------------ 00:08:07.811 0,1 29056/s 53 MiB/s 0 0 00:08:07.811 0,0 28928/s 53 MiB/s 0 0 00:08:07.811 ==================================================================================== 00:08:07.811 Total 57984/s 226 MiB/s 0 0' 00:08:07.811 17:54:24 -- accel/accel.sh@20 -- # IFS=: 00:08:07.811 17:54:24 -- accel/accel.sh@20 -- # read -r var val 00:08:07.811 17:54:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:07.811 17:54:24 -- accel/accel.sh@12 -- # build_accel_config 00:08:07.811 17:54:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:07.811 17:54:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:07.811 17:54:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.811 17:54:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.812 17:54:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:07.812 17:54:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:07.812 17:54:24 -- accel/accel.sh@41 -- # local IFS=, 00:08:07.812 17:54:24 -- accel/accel.sh@42 -- # jq -r . 00:08:07.812 [2024-11-26 17:54:24.713345] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
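This accel_decomp_mthread run passes -T 2, which shows up above as "# threads/core: 2" and as two rows, 0,0 and 0,1, in the Core,Thread column: two worker threads on core 0. Their per-thread transfer counts add up to the Total row, which is a quick way to sanity-check multi-threaded output:

# illustrative only: per-thread rows should sum to the Total row
thread0=28928   # row "0,0 28928/s"
thread1=29056   # row "0,1 29056/s"
echo "$(( thread0 + thread1 ))/s"   # prints "57984/s", matching "Total 57984/s"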
00:08:07.812 [2024-11-26 17:54:24.713517] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71811 ] 00:08:08.071 [2024-11-26 17:54:24.866560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.071 [2024-11-26 17:54:24.946433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.330 17:54:25 -- accel/accel.sh@21 -- # val= 00:08:08.330 17:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:08.330 17:54:25 -- accel/accel.sh@21 -- # val= 00:08:08.330 17:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:08.330 17:54:25 -- accel/accel.sh@21 -- # val= 00:08:08.330 17:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:08.330 17:54:25 -- accel/accel.sh@21 -- # val=0x1 00:08:08.330 17:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:08.330 17:54:25 -- accel/accel.sh@21 -- # val= 00:08:08.330 17:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:08.330 17:54:25 -- accel/accel.sh@21 -- # val= 00:08:08.330 17:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:08.330 17:54:25 -- accel/accel.sh@21 -- # val=decompress 00:08:08.330 17:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.330 17:54:25 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:08.330 17:54:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:08.330 17:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:08.330 17:54:25 -- accel/accel.sh@21 -- # val= 00:08:08.330 17:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:08.330 17:54:25 -- accel/accel.sh@21 -- # val=software 00:08:08.330 17:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.330 17:54:25 -- accel/accel.sh@23 -- # accel_module=software 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:08.330 17:54:25 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:08.330 17:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:08.330 17:54:25 -- accel/accel.sh@21 -- # val=32 00:08:08.330 17:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:08.330 17:54:25 -- 
accel/accel.sh@21 -- # val=32 00:08:08.330 17:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:08.330 17:54:25 -- accel/accel.sh@21 -- # val=2 00:08:08.330 17:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:08.330 17:54:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:08.330 17:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:08.330 17:54:25 -- accel/accel.sh@21 -- # val=Yes 00:08:08.330 17:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:08.330 17:54:25 -- accel/accel.sh@21 -- # val= 00:08:08.330 17:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:08.330 17:54:25 -- accel/accel.sh@21 -- # val= 00:08:08.330 17:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # IFS=: 00:08:08.330 17:54:25 -- accel/accel.sh@20 -- # read -r var val 00:08:09.743 17:54:26 -- accel/accel.sh@21 -- # val= 00:08:09.743 17:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.743 17:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.743 17:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:09.743 17:54:26 -- accel/accel.sh@21 -- # val= 00:08:09.743 17:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.743 17:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.743 17:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:09.743 17:54:26 -- accel/accel.sh@21 -- # val= 00:08:09.743 17:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.743 17:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.743 17:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:09.743 17:54:26 -- accel/accel.sh@21 -- # val= 00:08:09.743 17:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.743 17:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.743 17:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:09.743 17:54:26 -- accel/accel.sh@21 -- # val= 00:08:09.743 17:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.743 17:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.743 17:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:09.743 17:54:26 -- accel/accel.sh@21 -- # val= 00:08:09.743 17:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.743 17:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.743 17:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:09.743 17:54:26 -- accel/accel.sh@21 -- # val= 00:08:09.743 17:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.743 17:54:26 -- accel/accel.sh@20 -- # IFS=: 00:08:09.743 17:54:26 -- accel/accel.sh@20 -- # read -r var val 00:08:09.743 17:54:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:09.743 17:54:26 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:09.743 17:54:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:09.743 00:08:09.743 real 0m3.273s 00:08:09.743 user 0m2.650s 00:08:09.743 sys 0m0.418s 00:08:09.743 17:54:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:09.743 17:54:26 -- common/autotest_common.sh@10 -- # set +x 00:08:09.743 ************************************ 00:08:09.743 END 
TEST accel_decomp_mthread 00:08:09.743 ************************************ 00:08:09.743 17:54:26 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:09.743 17:54:26 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:09.743 17:54:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.743 17:54:26 -- common/autotest_common.sh@10 -- # set +x 00:08:09.743 ************************************ 00:08:09.743 START TEST accel_deomp_full_mthread 00:08:09.743 ************************************ 00:08:09.743 17:54:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:09.743 17:54:26 -- accel/accel.sh@16 -- # local accel_opc 00:08:09.743 17:54:26 -- accel/accel.sh@17 -- # local accel_module 00:08:09.743 17:54:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:09.743 17:54:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:09.743 17:54:26 -- accel/accel.sh@12 -- # build_accel_config 00:08:09.743 17:54:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:09.743 17:54:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.743 17:54:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.743 17:54:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:09.743 17:54:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:09.743 17:54:26 -- accel/accel.sh@41 -- # local IFS=, 00:08:09.743 17:54:26 -- accel/accel.sh@42 -- # jq -r . 00:08:09.743 [2024-11-26 17:54:26.409094] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:09.743 [2024-11-26 17:54:26.409238] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71847 ] 00:08:09.743 [2024-11-26 17:54:26.555974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.743 [2024-11-26 17:54:26.614838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.121 17:54:27 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:11.121 00:08:11.121 SPDK Configuration: 00:08:11.121 Core mask: 0x1 00:08:11.121 00:08:11.121 Accel Perf Configuration: 00:08:11.121 Workload Type: decompress 00:08:11.121 Transfer size: 111250 bytes 00:08:11.121 Vector count 1 00:08:11.121 Module: software 00:08:11.121 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:11.121 Queue depth: 32 00:08:11.121 Allocate depth: 32 00:08:11.121 # threads/core: 2 00:08:11.121 Run time: 1 seconds 00:08:11.121 Verify: Yes 00:08:11.121 00:08:11.121 Running for 1 seconds... 
00:08:11.121 00:08:11.121 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:11.121 ------------------------------------------------------------------------------------ 00:08:11.121 0,1 2208/s 91 MiB/s 0 0 00:08:11.121 0,0 2176/s 89 MiB/s 0 0 00:08:11.121 ==================================================================================== 00:08:11.121 Total 4384/s 465 MiB/s 0 0' 00:08:11.121 17:54:27 -- accel/accel.sh@20 -- # IFS=: 00:08:11.121 17:54:27 -- accel/accel.sh@20 -- # read -r var val 00:08:11.121 17:54:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:11.121 17:54:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:11.121 17:54:27 -- accel/accel.sh@12 -- # build_accel_config 00:08:11.121 17:54:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:11.121 17:54:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.121 17:54:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.121 17:54:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:11.121 17:54:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:11.121 17:54:27 -- accel/accel.sh@41 -- # local IFS=, 00:08:11.121 17:54:27 -- accel/accel.sh@42 -- # jq -r . 00:08:11.121 [2024-11-26 17:54:27.938201] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:11.121 [2024-11-26 17:54:27.938345] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71873 ] 00:08:11.381 [2024-11-26 17:54:28.090903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.381 [2024-11-26 17:54:28.138171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.381 17:54:28 -- accel/accel.sh@21 -- # val= 00:08:11.381 17:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.381 17:54:28 -- accel/accel.sh@21 -- # val= 00:08:11.381 17:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.381 17:54:28 -- accel/accel.sh@21 -- # val= 00:08:11.381 17:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.381 17:54:28 -- accel/accel.sh@21 -- # val=0x1 00:08:11.381 17:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.381 17:54:28 -- accel/accel.sh@21 -- # val= 00:08:11.381 17:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.381 17:54:28 -- accel/accel.sh@21 -- # val= 00:08:11.381 17:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.381 17:54:28 -- accel/accel.sh@21 -- # val=decompress 00:08:11.381 17:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.381 17:54:28 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.381 17:54:28 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:11.381 17:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.381 17:54:28 -- accel/accel.sh@21 -- # val= 00:08:11.381 17:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.381 17:54:28 -- accel/accel.sh@21 -- # val=software 00:08:11.381 17:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.381 17:54:28 -- accel/accel.sh@23 -- # accel_module=software 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.381 17:54:28 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:11.381 17:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.381 17:54:28 -- accel/accel.sh@21 -- # val=32 00:08:11.381 17:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.381 17:54:28 -- accel/accel.sh@21 -- # val=32 00:08:11.381 17:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.381 17:54:28 -- accel/accel.sh@21 -- # val=2 00:08:11.381 17:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.381 17:54:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:11.381 17:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.381 17:54:28 -- accel/accel.sh@21 -- # val=Yes 00:08:11.381 17:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.381 17:54:28 -- accel/accel.sh@21 -- # val= 00:08:11.381 17:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.381 17:54:28 -- accel/accel.sh@21 -- # val= 00:08:11.381 17:54:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # IFS=: 00:08:11.381 17:54:28 -- accel/accel.sh@20 -- # read -r var val 00:08:12.761 17:54:29 -- accel/accel.sh@21 -- # val= 00:08:12.761 17:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.761 17:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:12.761 17:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:12.761 17:54:29 -- accel/accel.sh@21 -- # val= 00:08:12.761 17:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.761 17:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:12.761 17:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:12.761 17:54:29 -- accel/accel.sh@21 -- # val= 00:08:12.761 17:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.761 17:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:12.761 17:54:29 -- accel/accel.sh@20 -- # 
read -r var val 00:08:12.761 17:54:29 -- accel/accel.sh@21 -- # val= 00:08:12.761 17:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.761 17:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:12.761 17:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:12.761 17:54:29 -- accel/accel.sh@21 -- # val= 00:08:12.761 17:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.761 17:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:12.761 17:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:12.761 17:54:29 -- accel/accel.sh@21 -- # val= 00:08:12.761 17:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.761 17:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:12.761 17:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:12.761 17:54:29 -- accel/accel.sh@21 -- # val= 00:08:12.761 17:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.761 17:54:29 -- accel/accel.sh@20 -- # IFS=: 00:08:12.761 17:54:29 -- accel/accel.sh@20 -- # read -r var val 00:08:12.761 17:54:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:12.761 17:54:29 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:12.761 17:54:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:12.761 00:08:12.761 real 0m3.021s 00:08:12.761 user 0m2.499s 00:08:12.761 sys 0m0.314s 00:08:12.761 17:54:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:12.761 17:54:29 -- common/autotest_common.sh@10 -- # set +x 00:08:12.761 ************************************ 00:08:12.761 END TEST accel_deomp_full_mthread 00:08:12.761 ************************************ 00:08:12.761 17:54:29 -- accel/accel.sh@116 -- # [[ n == y ]] 00:08:12.761 17:54:29 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:12.761 17:54:29 -- accel/accel.sh@129 -- # build_accel_config 00:08:12.761 17:54:29 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:12.761 17:54:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:12.761 17:54:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.761 17:54:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.761 17:54:29 -- common/autotest_common.sh@10 -- # set +x 00:08:12.761 17:54:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.761 17:54:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:12.761 17:54:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:12.761 17:54:29 -- accel/accel.sh@41 -- # local IFS=, 00:08:12.761 17:54:29 -- accel/accel.sh@42 -- # jq -r . 00:08:12.761 ************************************ 00:08:12.761 START TEST accel_dif_functional_tests 00:08:12.761 ************************************ 00:08:12.761 17:54:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:12.761 [2024-11-26 17:54:29.529036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
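The asterisk banners and pass/fail bookkeeping wrapped around each test come from run_test in autotest_common.sh; a simplified form of that wrapper (timing, xtrace handling, and failure accounting omitted):

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        "$@"                                  # the test command, e.g. accel_test ...
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }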
00:08:12.761 [2024-11-26 17:54:29.529169] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71904 ] 00:08:12.761 [2024-11-26 17:54:29.681831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:13.020 [2024-11-26 17:54:29.725617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.020 [2024-11-26 17:54:29.725737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.020 [2024-11-26 17:54:29.725627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.020 00:08:13.020 00:08:13.020 CUnit - A unit testing framework for C - Version 2.1-3 00:08:13.020 http://cunit.sourceforge.net/ 00:08:13.020 00:08:13.020 00:08:13.020 Suite: accel_dif 00:08:13.020 Test: verify: DIF generated, GUARD check ...passed 00:08:13.020 Test: verify: DIF generated, APPTAG check ...passed 00:08:13.020 Test: verify: DIF generated, REFTAG check ...passed 00:08:13.020 Test: verify: DIF not generated, GUARD check ...passed 00:08:13.020 Test: verify: DIF not generated, APPTAG check ...passed 00:08:13.020 Test: verify: DIF not generated, REFTAG check ...[2024-11-26 17:54:29.795186] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:13.020 [2024-11-26 17:54:29.795259] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:13.020 [2024-11-26 17:54:29.795334] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:13.020 [2024-11-26 17:54:29.795383] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:13.020 passed 00:08:13.020 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:13.020 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:08:13.020 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:13.020 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-11-26 17:54:29.795446] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:13.020 [2024-11-26 17:54:29.795551] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:13.020 [2024-11-26 17:54:29.795627] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:13.020 passed 00:08:13.020 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:13.020 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:08:13.020 Test: generate copy: DIF generated, GUARD check ...passed 00:08:13.020 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:13.020 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:13.020 Test: generate copy: DIF generated, no GUARD check flag set ...passed[2024-11-26 17:54:29.795859] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:13.020 00:08:13.020 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:13.020 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:13.020 Test: generate copy: iovecs-len validate ...passed 00:08:13.021 Test: generate copy: buffer alignment validate ...passed[2024-11-26 17:54:29.796438] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:13.021 00:08:13.021 00:08:13.021 Run Summary: Type Total Ran Passed Failed Inactive 00:08:13.021 suites 1 1 n/a 0 0 00:08:13.021 tests 20 20 20 0 0 00:08:13.021 asserts 204 204 204 0 n/a 00:08:13.021 00:08:13.021 Elapsed time = 0.004 seconds 00:08:13.279 00:08:13.279 real 0m0.572s 00:08:13.279 user 0m0.644s 00:08:13.279 sys 0m0.209s 00:08:13.279 17:54:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:13.279 17:54:30 -- common/autotest_common.sh@10 -- # set +x 00:08:13.279 ************************************ 00:08:13.279 END TEST accel_dif_functional_tests 00:08:13.279 ************************************ 00:08:13.279 00:08:13.279 real 1m6.198s 00:08:13.279 user 1m8.091s 00:08:13.279 sys 0m9.279s 00:08:13.279 17:54:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:13.279 17:54:30 -- common/autotest_common.sh@10 -- # set +x 00:08:13.279 ************************************ 00:08:13.279 END TEST accel 00:08:13.279 ************************************ 00:08:13.279 17:54:30 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:13.279 17:54:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:13.279 17:54:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.279 17:54:30 -- common/autotest_common.sh@10 -- # set +x 00:08:13.279 ************************************ 00:08:13.279 START TEST accel_rpc 00:08:13.279 ************************************ 00:08:13.279 17:54:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:13.539 * Looking for test storage... 00:08:13.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:13.539 17:54:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:13.539 17:54:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:13.539 17:54:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:13.539 17:54:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:13.539 17:54:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:13.539 17:54:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:13.539 17:54:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:13.539 17:54:30 -- scripts/common.sh@335 -- # IFS=.-: 00:08:13.539 17:54:30 -- scripts/common.sh@335 -- # read -ra ver1 00:08:13.539 17:54:30 -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.539 17:54:30 -- scripts/common.sh@336 -- # read -ra ver2 00:08:13.539 17:54:30 -- scripts/common.sh@337 -- # local 'op=<' 00:08:13.539 17:54:30 -- scripts/common.sh@339 -- # ver1_l=2 00:08:13.539 17:54:30 -- scripts/common.sh@340 -- # ver2_l=1 00:08:13.539 17:54:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:13.539 17:54:30 -- scripts/common.sh@343 -- # case "$op" in 00:08:13.539 17:54:30 -- scripts/common.sh@344 -- # : 1 00:08:13.539 17:54:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:13.539 17:54:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:13.539 17:54:30 -- scripts/common.sh@364 -- # decimal 1 00:08:13.539 17:54:30 -- scripts/common.sh@352 -- # local d=1 00:08:13.539 17:54:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.539 17:54:30 -- scripts/common.sh@354 -- # echo 1 00:08:13.539 17:54:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:13.539 17:54:30 -- scripts/common.sh@365 -- # decimal 2 00:08:13.539 17:54:30 -- scripts/common.sh@352 -- # local d=2 00:08:13.539 17:54:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.539 17:54:30 -- scripts/common.sh@354 -- # echo 2 00:08:13.539 17:54:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:13.539 17:54:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:13.539 17:54:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:13.539 17:54:30 -- scripts/common.sh@367 -- # return 0 00:08:13.539 17:54:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.539 17:54:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:13.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.539 --rc genhtml_branch_coverage=1 00:08:13.539 --rc genhtml_function_coverage=1 00:08:13.539 --rc genhtml_legend=1 00:08:13.539 --rc geninfo_all_blocks=1 00:08:13.539 --rc geninfo_unexecuted_blocks=1 00:08:13.539 00:08:13.539 ' 00:08:13.539 17:54:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:13.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.539 --rc genhtml_branch_coverage=1 00:08:13.539 --rc genhtml_function_coverage=1 00:08:13.539 --rc genhtml_legend=1 00:08:13.539 --rc geninfo_all_blocks=1 00:08:13.539 --rc geninfo_unexecuted_blocks=1 00:08:13.539 00:08:13.539 ' 00:08:13.539 17:54:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:13.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.539 --rc genhtml_branch_coverage=1 00:08:13.539 --rc genhtml_function_coverage=1 00:08:13.539 --rc genhtml_legend=1 00:08:13.539 --rc geninfo_all_blocks=1 00:08:13.539 --rc geninfo_unexecuted_blocks=1 00:08:13.539 00:08:13.539 ' 00:08:13.539 17:54:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:13.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.539 --rc genhtml_branch_coverage=1 00:08:13.539 --rc genhtml_function_coverage=1 00:08:13.539 --rc genhtml_legend=1 00:08:13.539 --rc geninfo_all_blocks=1 00:08:13.539 --rc geninfo_unexecuted_blocks=1 00:08:13.539 00:08:13.539 ' 00:08:13.539 17:54:30 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:13.539 17:54:30 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:13.539 17:54:30 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71982 00:08:13.539 17:54:30 -- accel/accel_rpc.sh@15 -- # waitforlisten 71982 00:08:13.539 17:54:30 -- common/autotest_common.sh@829 -- # '[' -z 71982 ']' 00:08:13.539 17:54:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.539 17:54:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:13.539 17:54:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
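waitforlisten, invoked just above, blocks until the freshly started spdk_tgt answers on its RPC socket; the loop below is a condensed stand-in for the pattern it implements, not the helper itself:

    ./build/bin/spdk_tgt --wait-for-rpc &
    spdk_tgt_pid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$spdk_tgt_pid" || exit 1    # give up if the target already died
        sleep 0.1
    done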
00:08:13.539 17:54:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:13.539 17:54:30 -- common/autotest_common.sh@10 -- # set +x 00:08:13.539 [2024-11-26 17:54:30.456236] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:13.539 [2024-11-26 17:54:30.456446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71982 ] 00:08:13.798 [2024-11-26 17:54:30.620063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.798 [2024-11-26 17:54:30.669064] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:13.798 [2024-11-26 17:54:30.669259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.366 17:54:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:14.366 17:54:31 -- common/autotest_common.sh@862 -- # return 0 00:08:14.366 17:54:31 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:14.366 17:54:31 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:14.366 17:54:31 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:14.366 17:54:31 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:14.366 17:54:31 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:14.366 17:54:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:14.366 17:54:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.366 17:54:31 -- common/autotest_common.sh@10 -- # set +x 00:08:14.366 ************************************ 00:08:14.366 START TEST accel_assign_opcode 00:08:14.366 ************************************ 00:08:14.366 17:54:31 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:08:14.366 17:54:31 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:14.366 17:54:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.366 17:54:31 -- common/autotest_common.sh@10 -- # set +x 00:08:14.366 [2024-11-26 17:54:31.285296] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:14.366 17:54:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.367 17:54:31 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:14.626 17:54:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.626 17:54:31 -- common/autotest_common.sh@10 -- # set +x 00:08:14.626 [2024-11-26 17:54:31.297224] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:14.626 17:54:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.626 17:54:31 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:14.626 17:54:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.626 17:54:31 -- common/autotest_common.sh@10 -- # set +x 00:08:14.626 17:54:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.626 17:54:31 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:14.626 17:54:31 -- accel/accel_rpc.sh@42 -- # grep software 00:08:14.626 17:54:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.626 17:54:31 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:14.626 17:54:31 -- common/autotest_common.sh@10 -- # set +x 00:08:14.626 17:54:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.626 software 00:08:14.626 
************************************ 00:08:14.626 END TEST accel_assign_opcode 00:08:14.626 ************************************ 00:08:14.626 00:08:14.626 real 0m0.249s 00:08:14.626 user 0m0.042s 00:08:14.626 sys 0m0.015s 00:08:14.626 17:54:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:14.626 17:54:31 -- common/autotest_common.sh@10 -- # set +x 00:08:14.885 17:54:31 -- accel/accel_rpc.sh@55 -- # killprocess 71982 00:08:14.885 17:54:31 -- common/autotest_common.sh@936 -- # '[' -z 71982 ']' 00:08:14.885 17:54:31 -- common/autotest_common.sh@940 -- # kill -0 71982 00:08:14.885 17:54:31 -- common/autotest_common.sh@941 -- # uname 00:08:14.885 17:54:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:14.885 17:54:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71982 00:08:14.885 killing process with pid 71982 00:08:14.885 17:54:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:14.885 17:54:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:14.885 17:54:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71982' 00:08:14.885 17:54:31 -- common/autotest_common.sh@955 -- # kill 71982 00:08:14.885 17:54:31 -- common/autotest_common.sh@960 -- # wait 71982 00:08:15.144 00:08:15.144 real 0m1.867s 00:08:15.144 user 0m1.778s 00:08:15.144 sys 0m0.568s 00:08:15.144 17:54:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:15.144 ************************************ 00:08:15.144 END TEST accel_rpc 00:08:15.144 ************************************ 00:08:15.144 17:54:31 -- common/autotest_common.sh@10 -- # set +x 00:08:15.144 17:54:32 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:15.144 17:54:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:15.144 17:54:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.144 17:54:32 -- common/autotest_common.sh@10 -- # set +x 00:08:15.403 ************************************ 00:08:15.403 START TEST app_cmdline 00:08:15.403 ************************************ 00:08:15.403 17:54:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:15.403 * Looking for test storage... 
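Done by hand, the opcode-assignment sequence the accel_rpc suite just verified looks like this (default RPC socket assumed):

    ./scripts/rpc.py accel_assign_opc -o copy -m incorrect    # logged above: copy assigned to module "incorrect"
    ./scripts/rpc.py accel_assign_opc -o copy -m software     # reassigns copy before init
    ./scripts/rpc.py framework_start_init                     # assignments take effect here
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # prints: software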
00:08:15.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:15.403 17:54:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:15.403 17:54:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:15.403 17:54:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:15.403 17:54:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:15.403 17:54:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:15.403 17:54:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:15.403 17:54:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:15.403 17:54:32 -- scripts/common.sh@335 -- # IFS=.-: 00:08:15.403 17:54:32 -- scripts/common.sh@335 -- # read -ra ver1 00:08:15.403 17:54:32 -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.403 17:54:32 -- scripts/common.sh@336 -- # read -ra ver2 00:08:15.403 17:54:32 -- scripts/common.sh@337 -- # local 'op=<' 00:08:15.403 17:54:32 -- scripts/common.sh@339 -- # ver1_l=2 00:08:15.403 17:54:32 -- scripts/common.sh@340 -- # ver2_l=1 00:08:15.403 17:54:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:15.403 17:54:32 -- scripts/common.sh@343 -- # case "$op" in 00:08:15.403 17:54:32 -- scripts/common.sh@344 -- # : 1 00:08:15.403 17:54:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:15.403 17:54:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:15.403 17:54:32 -- scripts/common.sh@364 -- # decimal 1 00:08:15.403 17:54:32 -- scripts/common.sh@352 -- # local d=1 00:08:15.403 17:54:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.403 17:54:32 -- scripts/common.sh@354 -- # echo 1 00:08:15.403 17:54:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:15.403 17:54:32 -- scripts/common.sh@365 -- # decimal 2 00:08:15.403 17:54:32 -- scripts/common.sh@352 -- # local d=2 00:08:15.403 17:54:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.403 17:54:32 -- scripts/common.sh@354 -- # echo 2 00:08:15.403 17:54:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:15.403 17:54:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:15.403 17:54:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:15.403 17:54:32 -- scripts/common.sh@367 -- # return 0 00:08:15.403 17:54:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.403 17:54:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:15.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.403 --rc genhtml_branch_coverage=1 00:08:15.403 --rc genhtml_function_coverage=1 00:08:15.403 --rc genhtml_legend=1 00:08:15.403 --rc geninfo_all_blocks=1 00:08:15.403 --rc geninfo_unexecuted_blocks=1 00:08:15.403 00:08:15.403 ' 00:08:15.403 17:54:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:15.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.403 --rc genhtml_branch_coverage=1 00:08:15.403 --rc genhtml_function_coverage=1 00:08:15.403 --rc genhtml_legend=1 00:08:15.403 --rc geninfo_all_blocks=1 00:08:15.403 --rc geninfo_unexecuted_blocks=1 00:08:15.403 00:08:15.403 ' 00:08:15.403 17:54:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:15.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.403 --rc genhtml_branch_coverage=1 00:08:15.403 --rc genhtml_function_coverage=1 00:08:15.403 --rc genhtml_legend=1 00:08:15.403 --rc geninfo_all_blocks=1 00:08:15.403 --rc geninfo_unexecuted_blocks=1 00:08:15.403 00:08:15.403 ' 00:08:15.403 17:54:32 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:15.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.403 --rc genhtml_branch_coverage=1 00:08:15.403 --rc genhtml_function_coverage=1 00:08:15.403 --rc genhtml_legend=1 00:08:15.403 --rc geninfo_all_blocks=1 00:08:15.403 --rc geninfo_unexecuted_blocks=1 00:08:15.403 00:08:15.403 ' 00:08:15.403 17:54:32 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:15.403 17:54:32 -- app/cmdline.sh@17 -- # spdk_tgt_pid=72083 00:08:15.403 17:54:32 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:15.403 17:54:32 -- app/cmdline.sh@18 -- # waitforlisten 72083 00:08:15.403 17:54:32 -- common/autotest_common.sh@829 -- # '[' -z 72083 ']' 00:08:15.403 17:54:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.403 17:54:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:15.403 17:54:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.403 17:54:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:15.403 17:54:32 -- common/autotest_common.sh@10 -- # set +x 00:08:15.662 [2024-11-26 17:54:32.382991] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:15.662 [2024-11-26 17:54:32.383126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72083 ] 00:08:15.662 [2024-11-26 17:54:32.534085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.662 [2024-11-26 17:54:32.574132] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:15.662 [2024-11-26 17:54:32.574329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.621 17:54:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:16.621 17:54:33 -- common/autotest_common.sh@862 -- # return 0 00:08:16.621 17:54:33 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:16.621 { 00:08:16.621 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:08:16.621 "fields": { 00:08:16.621 "major": 24, 00:08:16.621 "minor": 1, 00:08:16.621 "patch": 1, 00:08:16.621 "suffix": "-pre", 00:08:16.621 "commit": "c13c99a5e" 00:08:16.621 } 00:08:16.621 } 00:08:16.622 17:54:33 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:16.622 17:54:33 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:16.622 17:54:33 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:16.622 17:54:33 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:16.622 17:54:33 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:16.622 17:54:33 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:16.622 17:54:33 -- app/cmdline.sh@26 -- # sort 00:08:16.622 17:54:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.622 17:54:33 -- common/autotest_common.sh@10 -- # set +x 00:08:16.622 17:54:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.622 17:54:33 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:16.622 17:54:33 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:16.622 17:54:33 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:16.622 17:54:33 -- common/autotest_common.sh@650 -- # local es=0 00:08:16.622 17:54:33 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:16.622 17:54:33 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:16.622 17:54:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.622 17:54:33 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:16.622 17:54:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.622 17:54:33 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:16.622 17:54:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.622 17:54:33 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:16.622 17:54:33 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:16.622 17:54:33 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:16.880 request: 00:08:16.880 { 00:08:16.880 "method": "env_dpdk_get_mem_stats", 00:08:16.880 "req_id": 1 00:08:16.880 } 00:08:16.880 Got JSON-RPC error response 00:08:16.880 response: 00:08:16.880 { 00:08:16.880 "code": -32601, 00:08:16.880 "message": "Method not found" 00:08:16.880 } 00:08:16.880 17:54:33 -- common/autotest_common.sh@653 -- # es=1 00:08:16.880 17:54:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:16.880 17:54:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:16.880 17:54:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:16.880 17:54:33 -- app/cmdline.sh@1 -- # killprocess 72083 00:08:16.880 17:54:33 -- common/autotest_common.sh@936 -- # '[' -z 72083 ']' 00:08:16.880 17:54:33 -- common/autotest_common.sh@940 -- # kill -0 72083 00:08:16.880 17:54:33 -- common/autotest_common.sh@941 -- # uname 00:08:16.880 17:54:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:16.880 17:54:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72083 00:08:16.880 killing process with pid 72083 00:08:16.880 17:54:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:16.880 17:54:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:16.880 17:54:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72083' 00:08:16.880 17:54:33 -- common/autotest_common.sh@955 -- # kill 72083 00:08:16.880 17:54:33 -- common/autotest_common.sh@960 -- # wait 72083 00:08:17.449 00:08:17.449 real 0m2.042s 00:08:17.449 user 0m2.320s 00:08:17.449 sys 0m0.556s 00:08:17.449 ************************************ 00:08:17.449 END TEST app_cmdline 00:08:17.449 ************************************ 00:08:17.449 17:54:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.449 17:54:34 -- common/autotest_common.sh@10 -- # set +x 00:08:17.449 17:54:34 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:17.449 17:54:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:17.449 17:54:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.449 17:54:34 -- common/autotest_common.sh@10 -- # set +x 00:08:17.449 
************************************ 00:08:17.449 START TEST version 00:08:17.449 ************************************ 00:08:17.449 17:54:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:17.449 * Looking for test storage... 00:08:17.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:17.449 17:54:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:17.449 17:54:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:17.449 17:54:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:17.709 17:54:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:17.709 17:54:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:17.709 17:54:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:17.709 17:54:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:17.709 17:54:34 -- scripts/common.sh@335 -- # IFS=.-: 00:08:17.709 17:54:34 -- scripts/common.sh@335 -- # read -ra ver1 00:08:17.709 17:54:34 -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.709 17:54:34 -- scripts/common.sh@336 -- # read -ra ver2 00:08:17.709 17:54:34 -- scripts/common.sh@337 -- # local 'op=<' 00:08:17.709 17:54:34 -- scripts/common.sh@339 -- # ver1_l=2 00:08:17.709 17:54:34 -- scripts/common.sh@340 -- # ver2_l=1 00:08:17.709 17:54:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:17.709 17:54:34 -- scripts/common.sh@343 -- # case "$op" in 00:08:17.709 17:54:34 -- scripts/common.sh@344 -- # : 1 00:08:17.709 17:54:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:17.709 17:54:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:17.709 17:54:34 -- scripts/common.sh@364 -- # decimal 1 00:08:17.709 17:54:34 -- scripts/common.sh@352 -- # local d=1 00:08:17.709 17:54:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.709 17:54:34 -- scripts/common.sh@354 -- # echo 1 00:08:17.709 17:54:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:17.709 17:54:34 -- scripts/common.sh@365 -- # decimal 2 00:08:17.709 17:54:34 -- scripts/common.sh@352 -- # local d=2 00:08:17.709 17:54:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.709 17:54:34 -- scripts/common.sh@354 -- # echo 2 00:08:17.709 17:54:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:17.709 17:54:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:17.709 17:54:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:17.709 17:54:34 -- scripts/common.sh@367 -- # return 0 00:08:17.709 17:54:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.709 17:54:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:17.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.709 --rc genhtml_branch_coverage=1 00:08:17.709 --rc genhtml_function_coverage=1 00:08:17.709 --rc genhtml_legend=1 00:08:17.709 --rc geninfo_all_blocks=1 00:08:17.709 --rc geninfo_unexecuted_blocks=1 00:08:17.709 00:08:17.709 ' 00:08:17.709 17:54:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:17.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.709 --rc genhtml_branch_coverage=1 00:08:17.709 --rc genhtml_function_coverage=1 00:08:17.709 --rc genhtml_legend=1 00:08:17.709 --rc geninfo_all_blocks=1 00:08:17.709 --rc geninfo_unexecuted_blocks=1 00:08:17.709 00:08:17.709 ' 00:08:17.709 17:54:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:17.709 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:17.709 --rc genhtml_branch_coverage=1 00:08:17.710 --rc genhtml_function_coverage=1 00:08:17.710 --rc genhtml_legend=1 00:08:17.710 --rc geninfo_all_blocks=1 00:08:17.710 --rc geninfo_unexecuted_blocks=1 00:08:17.710 00:08:17.710 ' 00:08:17.710 17:54:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:17.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.710 --rc genhtml_branch_coverage=1 00:08:17.710 --rc genhtml_function_coverage=1 00:08:17.710 --rc genhtml_legend=1 00:08:17.710 --rc geninfo_all_blocks=1 00:08:17.710 --rc geninfo_unexecuted_blocks=1 00:08:17.710 00:08:17.710 ' 00:08:17.710 17:54:34 -- app/version.sh@17 -- # get_header_version major 00:08:17.710 17:54:34 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:17.710 17:54:34 -- app/version.sh@14 -- # cut -f2 00:08:17.710 17:54:34 -- app/version.sh@14 -- # tr -d '"' 00:08:17.710 17:54:34 -- app/version.sh@17 -- # major=24 00:08:17.710 17:54:34 -- app/version.sh@18 -- # get_header_version minor 00:08:17.710 17:54:34 -- app/version.sh@14 -- # cut -f2 00:08:17.710 17:54:34 -- app/version.sh@14 -- # tr -d '"' 00:08:17.710 17:54:34 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:17.710 17:54:34 -- app/version.sh@18 -- # minor=1 00:08:17.710 17:54:34 -- app/version.sh@19 -- # get_header_version patch 00:08:17.710 17:54:34 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:17.710 17:54:34 -- app/version.sh@14 -- # cut -f2 00:08:17.710 17:54:34 -- app/version.sh@14 -- # tr -d '"' 00:08:17.710 17:54:34 -- app/version.sh@19 -- # patch=1 00:08:17.710 17:54:34 -- app/version.sh@20 -- # get_header_version suffix 00:08:17.710 17:54:34 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:17.710 17:54:34 -- app/version.sh@14 -- # cut -f2 00:08:17.710 17:54:34 -- app/version.sh@14 -- # tr -d '"' 00:08:17.710 17:54:34 -- app/version.sh@20 -- # suffix=-pre 00:08:17.710 17:54:34 -- app/version.sh@22 -- # version=24.1 00:08:17.710 17:54:34 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:17.710 17:54:34 -- app/version.sh@25 -- # version=24.1.1 00:08:17.710 17:54:34 -- app/version.sh@28 -- # version=24.1.1rc0 00:08:17.710 17:54:34 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:17.710 17:54:34 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:17.710 17:54:34 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:08:17.710 17:54:34 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:08:17.710 ************************************ 00:08:17.710 END TEST version 00:08:17.710 ************************************ 00:08:17.710 00:08:17.710 real 0m0.308s 00:08:17.710 user 0m0.191s 00:08:17.710 sys 0m0.173s 00:08:17.710 17:54:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.710 17:54:34 -- common/autotest_common.sh@10 -- # set +x 00:08:17.710 17:54:34 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:08:17.710 17:54:34 -- spdk/autotest.sh@191 -- # uname -s 00:08:17.710 17:54:34 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
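The version checks above scrape include/spdk/version.h; a condensed form of the get_header_version helper they trace, with the follow-on assembly of the 24.1.1 string:

    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" include/spdk/version.h |
            cut -f2 | tr -d '"'   # field 2 of the tab-separated #define, quotes stripped
    }
    version="$(get_header_version major).$(get_header_version minor)"              # 24.1
    (( $(get_header_version patch) != 0 )) && version+=".$(get_header_version patch)"  # 24.1.1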
00:08:17.710 17:54:34 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:08:17.710 17:54:34 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:08:17.710 17:54:34 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:08:17.710 17:54:34 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:17.710 17:54:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:17.710 17:54:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.710 17:54:34 -- common/autotest_common.sh@10 -- # set +x 00:08:17.710 ************************************ 00:08:17.710 START TEST blockdev_nvme 00:08:17.710 ************************************ 00:08:17.710 17:54:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:17.970 * Looking for test storage... 00:08:17.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:17.970 17:54:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:17.970 17:54:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:17.970 17:54:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:17.970 17:54:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:17.970 17:54:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:17.970 17:54:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:17.970 17:54:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:17.970 17:54:34 -- scripts/common.sh@335 -- # IFS=.-: 00:08:17.970 17:54:34 -- scripts/common.sh@335 -- # read -ra ver1 00:08:17.970 17:54:34 -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.970 17:54:34 -- scripts/common.sh@336 -- # read -ra ver2 00:08:17.970 17:54:34 -- scripts/common.sh@337 -- # local 'op=<' 00:08:17.970 17:54:34 -- scripts/common.sh@339 -- # ver1_l=2 00:08:17.970 17:54:34 -- scripts/common.sh@340 -- # ver2_l=1 00:08:17.970 17:54:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:17.970 17:54:34 -- scripts/common.sh@343 -- # case "$op" in 00:08:17.970 17:54:34 -- scripts/common.sh@344 -- # : 1 00:08:17.970 17:54:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:17.970 17:54:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.970 17:54:34 -- scripts/common.sh@364 -- # decimal 1 00:08:17.970 17:54:34 -- scripts/common.sh@352 -- # local d=1 00:08:17.970 17:54:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.970 17:54:34 -- scripts/common.sh@354 -- # echo 1 00:08:17.970 17:54:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:17.970 17:54:34 -- scripts/common.sh@365 -- # decimal 2 00:08:17.970 17:54:34 -- scripts/common.sh@352 -- # local d=2 00:08:17.970 17:54:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.970 17:54:34 -- scripts/common.sh@354 -- # echo 2 00:08:17.970 17:54:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:17.970 17:54:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:17.970 17:54:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:17.970 17:54:34 -- scripts/common.sh@367 -- # return 0 00:08:17.970 17:54:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.970 17:54:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:17.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.970 --rc genhtml_branch_coverage=1 00:08:17.970 --rc genhtml_function_coverage=1 00:08:17.970 --rc genhtml_legend=1 00:08:17.970 --rc geninfo_all_blocks=1 00:08:17.970 --rc geninfo_unexecuted_blocks=1 00:08:17.970 00:08:17.970 ' 00:08:17.970 17:54:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:17.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.970 --rc genhtml_branch_coverage=1 00:08:17.970 --rc genhtml_function_coverage=1 00:08:17.970 --rc genhtml_legend=1 00:08:17.970 --rc geninfo_all_blocks=1 00:08:17.970 --rc geninfo_unexecuted_blocks=1 00:08:17.970 00:08:17.970 ' 00:08:17.970 17:54:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:17.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.970 --rc genhtml_branch_coverage=1 00:08:17.970 --rc genhtml_function_coverage=1 00:08:17.970 --rc genhtml_legend=1 00:08:17.970 --rc geninfo_all_blocks=1 00:08:17.970 --rc geninfo_unexecuted_blocks=1 00:08:17.970 00:08:17.970 ' 00:08:17.970 17:54:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:17.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.970 --rc genhtml_branch_coverage=1 00:08:17.970 --rc genhtml_function_coverage=1 00:08:17.970 --rc genhtml_legend=1 00:08:17.970 --rc geninfo_all_blocks=1 00:08:17.970 --rc geninfo_unexecuted_blocks=1 00:08:17.970 00:08:17.970 ' 00:08:17.970 17:54:34 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:17.970 17:54:34 -- bdev/nbd_common.sh@6 -- # set -e 00:08:17.970 17:54:34 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:17.970 17:54:34 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:17.970 17:54:34 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:17.970 17:54:34 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:17.970 17:54:34 -- bdev/blockdev.sh@18 -- # : 00:08:17.970 17:54:34 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:08:17.970 17:54:34 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:08:17.970 17:54:34 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:08:17.970 17:54:34 -- bdev/blockdev.sh@672 -- # uname -s 00:08:17.970 17:54:34 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:08:17.970 17:54:34 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:08:17.970 17:54:34 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:08:17.970 17:54:34 -- bdev/blockdev.sh@681 -- # crypto_device= 00:08:17.970 17:54:34 -- bdev/blockdev.sh@682 -- # dek= 00:08:17.970 17:54:34 -- bdev/blockdev.sh@683 -- # env_ctx= 00:08:17.970 17:54:34 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:08:17.970 17:54:34 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:08:17.970 17:54:34 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:08:17.970 17:54:34 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:08:17.970 17:54:34 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:08:17.970 17:54:34 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=72244 00:08:17.970 17:54:34 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:17.970 17:54:34 -- bdev/blockdev.sh@47 -- # waitforlisten 72244 00:08:17.970 17:54:34 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:17.970 17:54:34 -- common/autotest_common.sh@829 -- # '[' -z 72244 ']' 00:08:17.970 17:54:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.970 17:54:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:17.970 17:54:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.970 17:54:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:17.970 17:54:34 -- common/autotest_common.sh@10 -- # set +x 00:08:18.229 [2024-11-26 17:54:34.912257] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:18.229 [2024-11-26 17:54:34.912549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72244 ] 00:08:18.229 [2024-11-26 17:54:35.064563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.229 [2024-11-26 17:54:35.111060] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:18.229 [2024-11-26 17:54:35.111440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.166 17:54:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:19.166 17:54:35 -- common/autotest_common.sh@862 -- # return 0 00:08:19.166 17:54:35 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:08:19.166 17:54:35 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:08:19.166 17:54:35 -- bdev/blockdev.sh@79 -- # local json 00:08:19.166 17:54:35 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:08:19.166 17:54:35 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:19.166 17:54:35 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:08:19.166 17:54:35 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.166 17:54:35 -- common/autotest_common.sh@10 -- # set +x 00:08:19.426 17:54:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.426 17:54:36 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:08:19.426 17:54:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.426 17:54:36 -- common/autotest_common.sh@10 -- # set +x 00:08:19.426 17:54:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.426 17:54:36 -- bdev/blockdev.sh@738 -- # cat 00:08:19.426 17:54:36 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:08:19.426 17:54:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.426 17:54:36 -- common/autotest_common.sh@10 -- # set +x 00:08:19.426 17:54:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.426 17:54:36 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:08:19.426 17:54:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.426 17:54:36 -- common/autotest_common.sh@10 -- # set +x 00:08:19.426 17:54:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.426 17:54:36 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:19.426 17:54:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.426 17:54:36 -- common/autotest_common.sh@10 -- # set +x 00:08:19.426 17:54:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.426 17:54:36 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:08:19.426 17:54:36 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:08:19.426 17:54:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.426 17:54:36 -- common/autotest_common.sh@10 -- # set +x 00:08:19.426 17:54:36 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:08:19.426 17:54:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.426 17:54:36 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:08:19.426 17:54:36 -- bdev/blockdev.sh@747 -- # jq -r .name 00:08:19.427 17:54:36 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "7b73aa8f-9fe8-49c7-8b77-a1e012c794bf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "7b73aa8f-9fe8-49c7-8b77-a1e012c794bf",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "4aee39de-9802-467f-b9c0-5077b2a93522"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "4aee39de-9802-467f-b9c0-5077b2a93522",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "96820321-3ff1-4af8-bc55-11dbaea1dfa0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "96820321-3ff1-4af8-bc55-11dbaea1dfa0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "4ce766c9-df69-4a1d-bb1b-0944ce1a344d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4ce766c9-df69-4a1d-bb1b-0944ce1a344d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 
1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "38bbc767-9f81-4979-abdb-c8e311f90b87"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "38bbc767-9f81-4979-abdb-c8e311f90b87",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "dce71248-5303-47f1-9476-93452924fae7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "dce71248-5303-47f1-9476-93452924fae7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:19.427 17:54:36 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:08:19.427 17:54:36 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:08:19.427 17:54:36 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:08:19.427 17:54:36 -- bdev/blockdev.sh@752 -- # killprocess 72244 00:08:19.427 17:54:36 -- common/autotest_common.sh@936 -- # '[' -z 72244 ']' 00:08:19.427 17:54:36 -- common/autotest_common.sh@940 -- # kill -0 72244 00:08:19.427 17:54:36 -- common/autotest_common.sh@941 -- # uname 00:08:19.427 17:54:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:19.427 17:54:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72244 
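The run of single-quoted fragments above is the shell's xtrace of a bdev_get_bdevs-style JSON dump (Nvme1n1 through Nvme3n1) being captured into the bdevs_name array. Against a live target the same inventory reduces to a few columns with jq; the socket path below is an assumption, not taken from this run:

    # Hypothetical re-query; field names match the dump above
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs \
        | jq -r '.[] | [.name, .block_size, .num_blocks,
                        (.driver_specific.nvme[0].pci_address // "-")] | @tsv'

The driver_specific.nvme array is what tells the QEMU controllers apart here: pci_address, serial_number and subnqn identify which controller backs each namespace.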
00:08:19.687 killing process with pid 72244 00:08:19.687 17:54:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:19.687 17:54:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:19.687 17:54:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72244' 00:08:19.687 17:54:36 -- common/autotest_common.sh@955 -- # kill 72244 00:08:19.687 17:54:36 -- common/autotest_common.sh@960 -- # wait 72244 00:08:19.946 17:54:36 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:19.947 17:54:36 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:19.947 17:54:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:19.947 17:54:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:19.947 17:54:36 -- common/autotest_common.sh@10 -- # set +x 00:08:19.947 ************************************ 00:08:19.947 START TEST bdev_hello_world 00:08:19.947 ************************************ 00:08:19.947 17:54:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:19.947 [2024-11-26 17:54:36.868922] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:19.947 [2024-11-26 17:54:36.869066] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72317 ] 00:08:20.206 [2024-11-26 17:54:37.020100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.206 [2024-11-26 17:54:37.067474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.772 [2024-11-26 17:54:37.444695] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:20.772 [2024-11-26 17:54:37.444775] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:20.772 [2024-11-26 17:54:37.444808] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:20.772 [2024-11-26 17:54:37.447174] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:20.772 [2024-11-26 17:54:37.447945] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:20.772 [2024-11-26 17:54:37.447995] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:20.772 [2024-11-26 17:54:37.448181] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
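The bdev_hello_world test above reduces to a single binary: hello_bdev loads the bdev configuration from the JSON file, opens the named bdev, acquires an I/O channel, writes a buffer, and reads it back (the 'Read string from bdev : Hello World!' notice). By hand, with the paths from this log:

    cd /home/vagrant/spdk_repo/spdk
    # --json supplies the NVMe attach configuration; -b picks the bdev to open
    sudo ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1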
00:08:20.772 00:08:20.772 [2024-11-26 17:54:37.448205] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:20.772 00:08:20.772 real 0m0.894s 00:08:20.772 user 0m0.565s 00:08:20.772 sys 0m0.226s 00:08:20.772 17:54:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:20.772 17:54:37 -- common/autotest_common.sh@10 -- # set +x 00:08:20.772 ************************************ 00:08:20.772 END TEST bdev_hello_world 00:08:20.772 ************************************ 00:08:21.031 17:54:37 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:08:21.031 17:54:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:21.031 17:54:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.031 17:54:37 -- common/autotest_common.sh@10 -- # set +x 00:08:21.031 ************************************ 00:08:21.031 START TEST bdev_bounds 00:08:21.031 ************************************ 00:08:21.031 17:54:37 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:08:21.031 17:54:37 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:21.031 17:54:37 -- bdev/blockdev.sh@288 -- # bdevio_pid=72348 00:08:21.031 17:54:37 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:21.031 17:54:37 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 72348' 00:08:21.031 Process bdevio pid: 72348 00:08:21.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.031 17:54:37 -- bdev/blockdev.sh@291 -- # waitforlisten 72348 00:08:21.031 17:54:37 -- common/autotest_common.sh@829 -- # '[' -z 72348 ']' 00:08:21.031 17:54:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.031 17:54:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.031 17:54:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.031 17:54:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.031 17:54:37 -- common/autotest_common.sh@10 -- # set +x 00:08:21.031 [2024-11-26 17:54:37.841935] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:21.031 [2024-11-26 17:54:37.842266] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72348 ] 00:08:21.291 [2024-11-26 17:54:37.995088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:21.291 [2024-11-26 17:54:38.044581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.291 [2024-11-26 17:54:38.044759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.291 [2024-11-26 17:54:38.044661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.859 17:54:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:21.859 17:54:38 -- common/autotest_common.sh@862 -- # return 0 00:08:21.859 17:54:38 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:22.119 I/O targets: 00:08:22.119 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:22.119 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:22.119 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:22.119 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:22.119 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:22.119 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:22.119 00:08:22.119 00:08:22.119 CUnit - A unit testing framework for C - Version 2.1-3 00:08:22.119 http://cunit.sourceforge.net/ 00:08:22.119 00:08:22.119 00:08:22.119 Suite: bdevio tests on: Nvme3n1 00:08:22.119 Test: blockdev write read block ...passed 00:08:22.119 Test: blockdev write zeroes read block ...passed 00:08:22.119 Test: blockdev write zeroes read no split ...passed 00:08:22.119 Test: blockdev write zeroes read split ...passed 00:08:22.119 Test: blockdev write zeroes read split partial ...passed 00:08:22.119 Test: blockdev reset ...[2024-11-26 17:54:38.809613] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:08:22.119 passed 00:08:22.119 Test: blockdev write read 8 blocks ...[2024-11-26 17:54:38.811721] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
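bdevio is started with -w, so it brings up the SPDK app, registers the six I/O targets listed above, and then waits; the CUnit suites only run once tests.py sends perform_tests over the app's RPC socket. A by-hand equivalent (default socket assumed):

    sudo test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    # wait for the RPC socket to appear, then drive the suites
    test/bdev/bdevio/tests.py perform_tests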
00:08:22.119 passed 00:08:22.119 Test: blockdev write read size > 128k ...passed 00:08:22.119 Test: blockdev write read invalid size ...passed 00:08:22.119 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:22.119 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:22.119 Test: blockdev write read max offset ...passed 00:08:22.119 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:22.119 Test: blockdev writev readv 8 blocks ...passed 00:08:22.119 Test: blockdev writev readv 30 x 1block ...passed 00:08:22.119 Test: blockdev writev readv block ...passed 00:08:22.119 Test: blockdev writev readv size > 128k ...passed 00:08:22.119 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:22.119 Test: blockdev comparev and writev ...[2024-11-26 17:54:38.818353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:08:22.119 Test: blockdev nvme passthru rw ...passed 00:08:22.119 Test: blockdev nvme passthru vendor specific ...SGL DATA BLOCK ADDRESS 0x2aea0e000 len:0x1000 00:08:22.119 [2024-11-26 17:54:38.818545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:22.119 [2024-11-26 17:54:38.819220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:22.119 [2024-11-26 17:54:38.819262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:08:22.119 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:08:22.119 passed 00:08:22.119 Test: blockdev copy ...passed 00:08:22.119 Suite: bdevio tests on: Nvme2n3 00:08:22.119 Test: blockdev write read block ...passed 00:08:22.119 Test: blockdev write zeroes read block ...passed 00:08:22.119 Test: blockdev write zeroes read no split ...passed 00:08:22.119 Test: blockdev write zeroes read split ...passed 00:08:22.119 Test: blockdev write zeroes read split partial ...passed 00:08:22.119 Test: blockdev reset ...[2024-11-26 17:54:38.836258] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:22.119 passed 00:08:22.119 Test: blockdev write read 8 blocks ...[2024-11-26 17:54:38.838518] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:22.119 passed 00:08:22.119 Test: blockdev write read size > 128k ...passed 00:08:22.119 Test: blockdev write read invalid size ...passed 00:08:22.119 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:22.119 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:22.119 Test: blockdev write read max offset ...passed 00:08:22.119 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:22.119 Test: blockdev writev readv 8 blocks ...passed 00:08:22.119 Test: blockdev writev readv 30 x 1block ...passed 00:08:22.119 Test: blockdev writev readv block ...passed 00:08:22.119 Test: blockdev writev readv size > 128k ...passed 00:08:22.119 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:22.119 Test: blockdev comparev and writev ...[2024-11-26 17:54:38.845245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aea0a000 len:0x1000 00:08:22.119 [2024-11-26 17:54:38.845305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:22.119 passed 00:08:22.119 Test: blockdev nvme passthru rw ...passed 00:08:22.119 Test: blockdev nvme passthru vendor specific ...[2024-11-26 17:54:38.846260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1passed 00:08:22.119 Test: blockdev nvme admin passthru ...passed 00:08:22.119 Test: blockdev copy ... cid:190 PRP1 0x0 PRP2 0x0 00:08:22.119 [2024-11-26 17:54:38.846357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:22.119 passed 00:08:22.119 Suite: bdevio tests on: Nvme2n2 00:08:22.119 Test: blockdev write read block ...passed 00:08:22.119 Test: blockdev write zeroes read block ...passed 00:08:22.119 Test: blockdev write zeroes read no split ...passed 00:08:22.119 Test: blockdev write zeroes read split ...passed 00:08:22.119 Test: blockdev write zeroes read split partial ...passed 00:08:22.119 Test: blockdev reset ...[2024-11-26 17:54:38.864808] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:22.119 [2024-11-26 17:54:38.867237] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
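The parenthesized pairs in the completion notices are the NVMe status code type and status code in hex: (02/85) is Media and Data Integrity Errors / Compare Failure, evidently expected here since the comparev test still passes, and (00/01) is Generic Command Status / Invalid Command Opcode, the reply to the deliberately bogus opcode the vendor-specific passthru test sends. A throwaway decoder for the two codes seen in this log (hypothetical helper, not part of the suite):

    decode_nvme_status() {   # usage: decode_nvme_status <sct-hex> <sc-hex>
      case "$1/$2" in
        00/01) echo "Generic Command Status / Invalid Command Opcode" ;;
        02/85) echo "Media and Data Integrity Errors / Compare Failure" ;;
        *)     echo "SCT=0x$1 SC=0x$2 (see the NVMe base spec status tables)" ;;
      esac
    }
    decode_nvme_status 02 85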
00:08:22.119 passed 00:08:22.119 Test: blockdev write read 8 blocks ...passed 00:08:22.119 Test: blockdev write read size > 128k ...passed 00:08:22.119 Test: blockdev write read invalid size ...passed 00:08:22.119 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:22.119 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:22.119 Test: blockdev write read max offset ...passed 00:08:22.119 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:22.119 Test: blockdev writev readv 8 blocks ...passed 00:08:22.119 Test: blockdev writev readv 30 x 1block ...passed 00:08:22.120 Test: blockdev writev readv block ...passed 00:08:22.120 Test: blockdev writev readv size > 128k ...passed 00:08:22.120 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:22.120 Test: blockdev comparev and writev ...[2024-11-26 17:54:38.875208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aea06000 len:0x1000 00:08:22.120 [2024-11-26 17:54:38.875266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:22.120 passed 00:08:22.120 Test: blockdev nvme passthru rw ...passed 00:08:22.120 Test: blockdev nvme passthru vendor specific ...[2024-11-26 17:54:38.876106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:22.120 [2024-11-26 17:54:38.876145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:22.120 passed 00:08:22.120 Test: blockdev nvme admin passthru ...passed 00:08:22.120 Test: blockdev copy ...passed 00:08:22.120 Suite: bdevio tests on: Nvme2n1 00:08:22.120 Test: blockdev write read block ...passed 00:08:22.120 Test: blockdev write zeroes read block ...passed 00:08:22.120 Test: blockdev write zeroes read no split ...passed 00:08:22.120 Test: blockdev write zeroes read split ...passed 00:08:22.120 Test: blockdev write zeroes read split partial ...passed 00:08:22.120 Test: blockdev reset ...[2024-11-26 17:54:38.907657] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:22.120 [2024-11-26 17:54:38.909981] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
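This is the third reset in a row of the controller at 0000:00:08.0: Nvme2n1, Nvme2n2 and Nvme2n3 are namespaces of the same QEMU controller (serial 12342), so each per-bdev reset test cycles the whole controller. The reset domains can be read off the bdev dump earlier; a jq grouping sketch (socket path assumed):

    scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs \
        | jq -r 'group_by(.driver_specific.nvme[0].pci_address)[]
                 | "\(.[0].driver_specific.nvme[0].pci_address): \([.[].name] | join(", "))"'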
00:08:22.120 passed 00:08:22.120 Test: blockdev write read 8 blocks ...passed 00:08:22.120 Test: blockdev write read size > 128k ...passed 00:08:22.120 Test: blockdev write read invalid size ...passed 00:08:22.120 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:22.120 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:22.120 Test: blockdev write read max offset ...passed 00:08:22.120 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:22.120 Test: blockdev writev readv 8 blocks ...passed 00:08:22.120 Test: blockdev writev readv 30 x 1block ...passed 00:08:22.120 Test: blockdev writev readv block ...passed 00:08:22.120 Test: blockdev writev readv size > 128k ...passed 00:08:22.120 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:22.120 Test: blockdev comparev and writev ...[2024-11-26 17:54:38.917708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aea02000 len:0x1000 00:08:22.120 [2024-11-26 17:54:38.917766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:22.120 passed 00:08:22.120 Test: blockdev nvme passthru rw ...passed 00:08:22.120 Test: blockdev nvme passthru vendor specific ...[2024-11-26 17:54:38.918767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:22.120 [2024-11-26 17:54:38.918805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:22.120 passed 00:08:22.120 Test: blockdev nvme admin passthru ...passed 00:08:22.120 Test: blockdev copy ...passed 00:08:22.120 Suite: bdevio tests on: Nvme1n1 00:08:22.120 Test: blockdev write read block ...passed 00:08:22.120 Test: blockdev write zeroes read block ...passed 00:08:22.120 Test: blockdev write zeroes read no split ...passed 00:08:22.120 Test: blockdev write zeroes read split ...passed 00:08:22.120 Test: blockdev write zeroes read split partial ...passed 00:08:22.120 Test: blockdev reset ...[2024-11-26 17:54:38.949972] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:08:22.120 passed 00:08:22.120 Test: blockdev write read 8 blocks ...[2024-11-26 17:54:38.952073] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:22.120 passed 00:08:22.120 Test: blockdev write read size > 128k ...passed 00:08:22.120 Test: blockdev write read invalid size ...passed 00:08:22.120 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:22.120 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:22.120 Test: blockdev write read max offset ...passed 00:08:22.120 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:22.120 Test: blockdev writev readv 8 blocks ...passed 00:08:22.120 Test: blockdev writev readv 30 x 1block ...passed 00:08:22.120 Test: blockdev writev readv block ...passed 00:08:22.120 Test: blockdev writev readv size > 128k ...passed 00:08:22.120 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:22.120 Test: blockdev comparev and writev ...[2024-11-26 17:54:38.959545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2be60e000 len:0x1000 00:08:22.120 [2024-11-26 17:54:38.959603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:22.120 passed 00:08:22.120 Test: blockdev nvme passthru rw ...passed 00:08:22.120 Test: blockdev nvme passthru vendor specific ...[2024-11-26 17:54:38.960592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:22.120 [2024-11-26 17:54:38.960628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:22.120 passed 00:08:22.120 Test: blockdev nvme admin passthru ...passed 00:08:22.120 Test: blockdev copy ...passed 00:08:22.120 Suite: bdevio tests on: Nvme0n1 00:08:22.120 Test: blockdev write read block ...passed 00:08:22.120 Test: blockdev write zeroes read block ...passed 00:08:22.120 Test: blockdev write zeroes read no split ...passed 00:08:22.120 Test: blockdev write zeroes read split ...passed 00:08:22.120 Test: blockdev write zeroes read split partial ...passed 00:08:22.120 Test: blockdev reset ...[2024-11-26 17:54:38.994608] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:22.120 [2024-11-26 17:54:38.996652] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:22.120 passed 00:08:22.120 Test: blockdev write read 8 blocks ...passed 00:08:22.120 Test: blockdev write read size > 128k ...passed 00:08:22.120 Test: blockdev write read invalid size ...passed 00:08:22.120 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:22.120 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:22.120 Test: blockdev write read max offset ...passed 00:08:22.120 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:22.120 Test: blockdev writev readv 8 blocks ...passed 00:08:22.120 Test: blockdev writev readv 30 x 1block ...passed 00:08:22.120 Test: blockdev writev readv block ...passed 00:08:22.120 Test: blockdev writev readv size > 128k ...passed 00:08:22.120 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:22.120 Test: blockdev comparev and writev ...passed 00:08:22.120 Test: blockdev nvme passthru rw ...[2024-11-26 17:54:39.004376] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:22.120 separate metadata which is not supported yet. 
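Note the skip on Nvme0n1: comparev_and_writev is bypassed rather than failed because that namespace is formatted with a separate per-block metadata region, which bdevio's compare path does not support yet. What each bdev does advertise is visible in the supported_io_types block of the dump earlier; for instance, compare-capable bdevs can be listed with (socket path assumed):

    scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs \
        | jq -r '.[] | select(.supported_io_types.compare) | .name'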
00:08:22.120 passed 00:08:22.120 Test: blockdev nvme passthru vendor specific ...[2024-11-26 17:54:39.005078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:22.120 [2024-11-26 17:54:39.005124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:22.120 passed 00:08:22.120 Test: blockdev nvme admin passthru ...passed 00:08:22.120 Test: blockdev copy ...passed 00:08:22.120 00:08:22.120 Run Summary: Type Total Ran Passed Failed Inactive 00:08:22.120 suites 6 6 n/a 0 0 00:08:22.120 tests 138 138 138 0 0 00:08:22.120 asserts 893 893 893 0 n/a 00:08:22.120 00:08:22.120 Elapsed time = 0.514 seconds 00:08:22.120 0 00:08:22.120 17:54:39 -- bdev/blockdev.sh@293 -- # killprocess 72348 00:08:22.120 17:54:39 -- common/autotest_common.sh@936 -- # '[' -z 72348 ']' 00:08:22.120 17:54:39 -- common/autotest_common.sh@940 -- # kill -0 72348 00:08:22.386 17:54:39 -- common/autotest_common.sh@941 -- # uname 00:08:22.386 17:54:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:22.386 17:54:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72348 00:08:22.386 killing process with pid 72348 00:08:22.386 17:54:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:22.386 17:54:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:22.386 17:54:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72348' 00:08:22.386 17:54:39 -- common/autotest_common.sh@955 -- # kill 72348 00:08:22.386 17:54:39 -- common/autotest_common.sh@960 -- # wait 72348 00:08:22.386 17:54:39 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:08:22.386 00:08:22.386 real 0m1.531s 00:08:22.386 user 0m3.711s 00:08:22.386 sys 0m0.391s 00:08:22.386 17:54:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:22.386 17:54:39 -- common/autotest_common.sh@10 -- # set +x 00:08:22.386 ************************************ 00:08:22.386 END TEST bdev_bounds 00:08:22.386 ************************************ 00:08:22.648 17:54:39 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:22.648 17:54:39 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:08:22.648 17:54:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:22.648 17:54:39 -- common/autotest_common.sh@10 -- # set +x 00:08:22.648 ************************************ 00:08:22.648 START TEST bdev_nbd 00:08:22.648 ************************************ 00:08:22.648 17:54:39 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:22.648 17:54:39 -- bdev/blockdev.sh@298 -- # uname -s 00:08:22.648 17:54:39 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:08:22.648 17:54:39 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.648 17:54:39 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:22.648 17:54:39 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:22.648 17:54:39 -- bdev/blockdev.sh@302 -- # local bdev_all 00:08:22.648 17:54:39 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:08:22.648 17:54:39 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:08:22.648 17:54:39 -- bdev/blockdev.sh@309 -- # 
nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:22.648 17:54:39 -- bdev/blockdev.sh@309 -- # local nbd_all 00:08:22.648 17:54:39 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:08:22.648 17:54:39 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:22.648 17:54:39 -- bdev/blockdev.sh@312 -- # local nbd_list 00:08:22.648 17:54:39 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:22.648 17:54:39 -- bdev/blockdev.sh@313 -- # local bdev_list 00:08:22.648 17:54:39 -- bdev/blockdev.sh@316 -- # nbd_pid=72402 00:08:22.648 17:54:39 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:22.648 17:54:39 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:22.648 17:54:39 -- bdev/blockdev.sh@318 -- # waitforlisten 72402 /var/tmp/spdk-nbd.sock 00:08:22.648 17:54:39 -- common/autotest_common.sh@829 -- # '[' -z 72402 ']' 00:08:22.648 17:54:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:22.648 17:54:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:22.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:22.648 17:54:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:22.648 17:54:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:22.648 17:54:39 -- common/autotest_common.sh@10 -- # set +x 00:08:22.648 [2024-11-26 17:54:39.458002] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:22.648 [2024-11-26 17:54:39.458139] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.906 [2024-11-26 17:54:39.609556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.906 [2024-11-26 17:54:39.656115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.473 17:54:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:23.473 17:54:40 -- common/autotest_common.sh@862 -- # return 0 00:08:23.473 17:54:40 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:23.473 17:54:40 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.473 17:54:40 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:23.473 17:54:40 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:23.473 17:54:40 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:23.473 17:54:40 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.473 17:54:40 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:23.473 17:54:40 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:23.473 17:54:40 -- bdev/nbd_common.sh@24 -- # local i 00:08:23.473 17:54:40 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:23.473 17:54:40 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:23.473 17:54:40 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:23.473 17:54:40 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:23.731 17:54:40 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:23.731 17:54:40 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:23.731 17:54:40 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:23.731 17:54:40 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:23.731 17:54:40 -- common/autotest_common.sh@867 -- # local i 00:08:23.731 17:54:40 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:23.731 17:54:40 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:23.731 17:54:40 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:23.731 17:54:40 -- common/autotest_common.sh@871 -- # break 00:08:23.731 17:54:40 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:23.731 17:54:40 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:23.731 17:54:40 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:23.731 1+0 records in 00:08:23.731 1+0 records out 00:08:23.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000537747 s, 7.6 MB/s 00:08:23.731 17:54:40 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:23.731 17:54:40 -- common/autotest_common.sh@884 -- # size=4096 00:08:23.731 17:54:40 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:23.731 17:54:40 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:23.731 17:54:40 -- common/autotest_common.sh@887 -- # return 0 00:08:23.731 17:54:40 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:23.731 17:54:40 -- 
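Every nbd_start_disk above is followed by the same verification dance: poll /proc/partitions until the kernel exposes the node, then issue one 4 KiB O_DIRECT read and check the copied size. Condensed into a single hypothetical helper (the sleep is an illustration; the log's loop simply re-tests up to 20 times):

    waitfornbd() {
      local name=$1 i
      for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions && break
        sleep 0.1                          # illustrative delay between re-checks
      done
      # one direct read proves the device actually serves I/O
      dd if=/dev/"$name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      [ "$(stat -c %s /tmp/nbdtest)" -ne 0 ] && rm -f /tmp/nbdtest
    }
    waitfornbd nbd0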
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:23.731 17:54:40 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:23.991 17:54:40 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:23.991 17:54:40 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:23.991 17:54:40 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:23.991 17:54:40 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:23.991 17:54:40 -- common/autotest_common.sh@867 -- # local i 00:08:23.991 17:54:40 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:23.991 17:54:40 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:23.991 17:54:40 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:23.991 17:54:40 -- common/autotest_common.sh@871 -- # break 00:08:23.991 17:54:40 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:23.991 17:54:40 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:23.991 17:54:40 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:23.991 1+0 records in 00:08:23.991 1+0 records out 00:08:23.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000782402 s, 5.2 MB/s 00:08:23.991 17:54:40 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:23.991 17:54:40 -- common/autotest_common.sh@884 -- # size=4096 00:08:23.991 17:54:40 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:23.991 17:54:40 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:23.991 17:54:40 -- common/autotest_common.sh@887 -- # return 0 00:08:23.991 17:54:40 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:23.991 17:54:40 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:23.991 17:54:40 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:24.250 17:54:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:24.250 17:54:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:24.250 17:54:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:24.250 17:54:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:24.250 17:54:41 -- common/autotest_common.sh@867 -- # local i 00:08:24.250 17:54:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:24.250 17:54:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:24.250 17:54:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:24.250 17:54:41 -- common/autotest_common.sh@871 -- # break 00:08:24.250 17:54:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:24.250 17:54:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:24.250 17:54:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:24.250 1+0 records in 00:08:24.250 1+0 records out 00:08:24.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000678157 s, 6.0 MB/s 00:08:24.250 17:54:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.250 17:54:41 -- common/autotest_common.sh@884 -- # size=4096 00:08:24.250 17:54:41 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.250 17:54:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:24.250 17:54:41 -- common/autotest_common.sh@887 -- # return 0 
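This first pass goes through nbd_start_disks_without_nbd_idx, so no /dev node is passed and SPDK binds each bdev to the next free nbd device (nbd0, nbd1, nbd2, ...). The RPC also accepts an explicit node, which the second pass later in this log uses; checking the resulting mapping (socket path from the log):

    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd3
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | "\(.nbd_device) -> \(.bdev_name)"'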
00:08:24.250 17:54:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:24.250 17:54:41 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:24.250 17:54:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:24.535 17:54:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:24.535 17:54:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:24.535 17:54:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:24.535 17:54:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:24.535 17:54:41 -- common/autotest_common.sh@867 -- # local i 00:08:24.535 17:54:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:24.535 17:54:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:24.535 17:54:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:24.535 17:54:41 -- common/autotest_common.sh@871 -- # break 00:08:24.535 17:54:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:24.535 17:54:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:24.535 17:54:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:24.535 1+0 records in 00:08:24.535 1+0 records out 00:08:24.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000771449 s, 5.3 MB/s 00:08:24.535 17:54:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.535 17:54:41 -- common/autotest_common.sh@884 -- # size=4096 00:08:24.535 17:54:41 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.535 17:54:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:24.535 17:54:41 -- common/autotest_common.sh@887 -- # return 0 00:08:24.535 17:54:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:24.535 17:54:41 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:24.535 17:54:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:24.818 17:54:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:24.818 17:54:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:24.818 17:54:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:24.818 17:54:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:24.818 17:54:41 -- common/autotest_common.sh@867 -- # local i 00:08:24.818 17:54:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:24.818 17:54:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:24.818 17:54:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:24.818 17:54:41 -- common/autotest_common.sh@871 -- # break 00:08:24.818 17:54:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:24.818 17:54:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:24.818 17:54:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:24.818 1+0 records in 00:08:24.818 1+0 records out 00:08:24.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641171 s, 6.4 MB/s 00:08:24.818 17:54:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.818 17:54:41 -- common/autotest_common.sh@884 -- # size=4096 00:08:24.818 17:54:41 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.818 17:54:41 -- common/autotest_common.sh@886 -- # '[' 
4096 '!=' 0 ']' 00:08:24.818 17:54:41 -- common/autotest_common.sh@887 -- # return 0 00:08:24.818 17:54:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:24.818 17:54:41 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:24.818 17:54:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:25.077 17:54:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:25.077 17:54:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:25.077 17:54:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:25.077 17:54:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:25.077 17:54:41 -- common/autotest_common.sh@867 -- # local i 00:08:25.077 17:54:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:25.077 17:54:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:25.077 17:54:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:25.077 17:54:41 -- common/autotest_common.sh@871 -- # break 00:08:25.077 17:54:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:25.077 17:54:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:25.077 17:54:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:25.077 1+0 records in 00:08:25.077 1+0 records out 00:08:25.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000694559 s, 5.9 MB/s 00:08:25.077 17:54:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.077 17:54:41 -- common/autotest_common.sh@884 -- # size=4096 00:08:25.077 17:54:41 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.077 17:54:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:25.077 17:54:41 -- common/autotest_common.sh@887 -- # return 0 00:08:25.077 17:54:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:25.077 17:54:41 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:25.077 17:54:41 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:25.336 17:54:42 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:25.336 { 00:08:25.336 "nbd_device": "/dev/nbd0", 00:08:25.336 "bdev_name": "Nvme0n1" 00:08:25.336 }, 00:08:25.336 { 00:08:25.336 "nbd_device": "/dev/nbd1", 00:08:25.336 "bdev_name": "Nvme1n1" 00:08:25.336 }, 00:08:25.336 { 00:08:25.336 "nbd_device": "/dev/nbd2", 00:08:25.336 "bdev_name": "Nvme2n1" 00:08:25.336 }, 00:08:25.336 { 00:08:25.336 "nbd_device": "/dev/nbd3", 00:08:25.336 "bdev_name": "Nvme2n2" 00:08:25.336 }, 00:08:25.336 { 00:08:25.336 "nbd_device": "/dev/nbd4", 00:08:25.336 "bdev_name": "Nvme2n3" 00:08:25.336 }, 00:08:25.336 { 00:08:25.336 "nbd_device": "/dev/nbd5", 00:08:25.336 "bdev_name": "Nvme3n1" 00:08:25.336 } 00:08:25.336 ]' 00:08:25.336 17:54:42 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:25.336 17:54:42 -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:25.336 { 00:08:25.336 "nbd_device": "/dev/nbd0", 00:08:25.336 "bdev_name": "Nvme0n1" 00:08:25.336 }, 00:08:25.336 { 00:08:25.336 "nbd_device": "/dev/nbd1", 00:08:25.336 "bdev_name": "Nvme1n1" 00:08:25.336 }, 00:08:25.336 { 00:08:25.336 "nbd_device": "/dev/nbd2", 00:08:25.336 "bdev_name": "Nvme2n1" 00:08:25.336 }, 00:08:25.336 { 00:08:25.336 "nbd_device": "/dev/nbd3", 00:08:25.336 "bdev_name": "Nvme2n2" 00:08:25.336 }, 00:08:25.336 { 00:08:25.336 "nbd_device": 
"/dev/nbd4", 00:08:25.336 "bdev_name": "Nvme2n3" 00:08:25.336 }, 00:08:25.336 { 00:08:25.336 "nbd_device": "/dev/nbd5", 00:08:25.336 "bdev_name": "Nvme3n1" 00:08:25.336 } 00:08:25.336 ]' 00:08:25.336 17:54:42 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:25.336 17:54:42 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:25.336 17:54:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.336 17:54:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:25.336 17:54:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:25.336 17:54:42 -- bdev/nbd_common.sh@51 -- # local i 00:08:25.336 17:54:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.336 17:54:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:25.595 17:54:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:25.595 17:54:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:25.595 17:54:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:25.595 17:54:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.595 17:54:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.595 17:54:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:25.595 17:54:42 -- bdev/nbd_common.sh@41 -- # break 00:08:25.595 17:54:42 -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.595 17:54:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.595 17:54:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:25.855 17:54:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:25.855 17:54:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:25.855 17:54:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:25.855 17:54:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.855 17:54:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.855 17:54:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:25.855 17:54:42 -- bdev/nbd_common.sh@41 -- # break 00:08:25.855 17:54:42 -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.855 17:54:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.855 17:54:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:26.114 17:54:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:26.114 17:54:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:26.114 17:54:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:26.114 17:54:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.114 17:54:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.114 17:54:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:26.114 17:54:42 -- bdev/nbd_common.sh@41 -- # break 00:08:26.114 17:54:42 -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.114 17:54:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.114 17:54:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:26.114 17:54:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:26.114 17:54:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:26.114 17:54:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:26.114 
17:54:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.114 17:54:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.114 17:54:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:26.114 17:54:43 -- bdev/nbd_common.sh@41 -- # break 00:08:26.114 17:54:43 -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.114 17:54:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.114 17:54:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:26.373 17:54:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:26.373 17:54:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:26.373 17:54:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:26.373 17:54:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.373 17:54:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.373 17:54:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:26.373 17:54:43 -- bdev/nbd_common.sh@41 -- # break 00:08:26.373 17:54:43 -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.373 17:54:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.373 17:54:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:26.631 17:54:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:26.631 17:54:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:26.631 17:54:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:26.631 17:54:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.631 17:54:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.631 17:54:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:26.631 17:54:43 -- bdev/nbd_common.sh@41 -- # break 00:08:26.631 17:54:43 -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.631 17:54:43 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:26.631 17:54:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.631 17:54:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@65 -- # true 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@65 -- # count=0 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@122 -- # count=0 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@127 -- # return 0 00:08:26.890 17:54:43 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@12 -- # local i 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:26.890 17:54:43 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:26.891 17:54:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:27.150 /dev/nbd0 00:08:27.150 17:54:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:27.150 17:54:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:27.150 17:54:43 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:27.150 17:54:43 -- common/autotest_common.sh@867 -- # local i 00:08:27.150 17:54:43 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:27.150 17:54:43 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:27.150 17:54:43 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:27.150 17:54:43 -- common/autotest_common.sh@871 -- # break 00:08:27.150 17:54:43 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:27.150 17:54:43 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:27.150 17:54:43 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.150 1+0 records in 00:08:27.150 1+0 records out 00:08:27.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000653438 s, 6.3 MB/s 00:08:27.150 17:54:43 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.150 17:54:44 -- common/autotest_common.sh@884 -- # size=4096 00:08:27.150 17:54:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.150 17:54:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:27.150 17:54:44 -- common/autotest_common.sh@887 -- # return 0 00:08:27.150 17:54:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.150 17:54:44 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:27.150 17:54:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:27.410 /dev/nbd1 00:08:27.410 17:54:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:27.410 17:54:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:27.410 17:54:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:27.410 17:54:44 -- common/autotest_common.sh@867 -- # local i 00:08:27.410 17:54:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:27.410 17:54:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:27.410 17:54:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:27.410 17:54:44 -- common/autotest_common.sh@871 -- # break 
00:08:27.410 17:54:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:27.410 17:54:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:27.410 17:54:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.410 1+0 records in 00:08:27.410 1+0 records out 00:08:27.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00276923 s, 1.5 MB/s 00:08:27.410 17:54:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.410 17:54:44 -- common/autotest_common.sh@884 -- # size=4096 00:08:27.410 17:54:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.410 17:54:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:27.410 17:54:44 -- common/autotest_common.sh@887 -- # return 0 00:08:27.410 17:54:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.410 17:54:44 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:27.410 17:54:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:27.669 /dev/nbd10 00:08:27.669 17:54:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:27.669 17:54:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:27.669 17:54:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:08:27.669 17:54:44 -- common/autotest_common.sh@867 -- # local i 00:08:27.669 17:54:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:27.669 17:54:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:27.669 17:54:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:08:27.669 17:54:44 -- common/autotest_common.sh@871 -- # break 00:08:27.669 17:54:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:27.669 17:54:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:27.669 17:54:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.669 1+0 records in 00:08:27.669 1+0 records out 00:08:27.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000784871 s, 5.2 MB/s 00:08:27.669 17:54:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.669 17:54:44 -- common/autotest_common.sh@884 -- # size=4096 00:08:27.669 17:54:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.669 17:54:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:27.669 17:54:44 -- common/autotest_common.sh@887 -- # return 0 00:08:27.669 17:54:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.669 17:54:44 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:27.669 17:54:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:27.928 /dev/nbd11 00:08:27.928 17:54:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:27.928 17:54:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:27.928 17:54:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:08:27.928 17:54:44 -- common/autotest_common.sh@867 -- # local i 00:08:27.928 17:54:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:27.928 17:54:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:27.928 17:54:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:08:27.928 17:54:44 -- 
common/autotest_common.sh@871 -- # break 00:08:27.928 17:54:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:27.928 17:54:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:27.928 17:54:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.928 1+0 records in 00:08:27.928 1+0 records out 00:08:27.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000734903 s, 5.6 MB/s 00:08:27.928 17:54:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.928 17:54:44 -- common/autotest_common.sh@884 -- # size=4096 00:08:27.928 17:54:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.928 17:54:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:27.928 17:54:44 -- common/autotest_common.sh@887 -- # return 0 00:08:27.928 17:54:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.928 17:54:44 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:27.928 17:54:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:28.186 /dev/nbd12 00:08:28.186 17:54:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:28.186 17:54:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:28.186 17:54:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:08:28.186 17:54:44 -- common/autotest_common.sh@867 -- # local i 00:08:28.186 17:54:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:28.186 17:54:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:28.186 17:54:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:08:28.187 17:54:45 -- common/autotest_common.sh@871 -- # break 00:08:28.187 17:54:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:28.187 17:54:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:28.187 17:54:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:28.187 1+0 records in 00:08:28.187 1+0 records out 00:08:28.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000881962 s, 4.6 MB/s 00:08:28.187 17:54:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.187 17:54:45 -- common/autotest_common.sh@884 -- # size=4096 00:08:28.187 17:54:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.187 17:54:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:28.187 17:54:45 -- common/autotest_common.sh@887 -- # return 0 00:08:28.187 17:54:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:28.187 17:54:45 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:28.187 17:54:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:28.446 /dev/nbd13 00:08:28.446 17:54:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:28.446 17:54:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:28.446 17:54:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:08:28.446 17:54:45 -- common/autotest_common.sh@867 -- # local i 00:08:28.446 17:54:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:28.446 17:54:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:28.446 17:54:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 
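nbd_start_disks walks the bdev list and the nbd list in lockstep, attaching each bdev over the spdk-nbd RPC socket and then blocking until the node is readable. A sketch of that loop, assuming the waitfornbd helper sketched earlier (paths and names copied from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    bdev_list=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

    for ((i = 0; i < ${#bdev_list[@]}; i++)); do
        # Attach the bdev to its nbd node, then wait for the node to come up.
        "$rpc" -s "$sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        waitfornbd "$(basename "${nbd_list[i]}")"
    done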
00:08:28.446 17:54:45 -- common/autotest_common.sh@871 -- # break 00:08:28.446 17:54:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:28.446 17:54:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:28.446 17:54:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:28.446 1+0 records in 00:08:28.446 1+0 records out 00:08:28.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112093 s, 3.7 MB/s 00:08:28.446 17:54:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.446 17:54:45 -- common/autotest_common.sh@884 -- # size=4096 00:08:28.446 17:54:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.446 17:54:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:28.446 17:54:45 -- common/autotest_common.sh@887 -- # return 0 00:08:28.446 17:54:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:28.446 17:54:45 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:28.446 17:54:45 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:28.446 17:54:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.446 17:54:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:28.704 17:54:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:28.704 { 00:08:28.704 "nbd_device": "/dev/nbd0", 00:08:28.704 "bdev_name": "Nvme0n1" 00:08:28.704 }, 00:08:28.704 { 00:08:28.704 "nbd_device": "/dev/nbd1", 00:08:28.704 "bdev_name": "Nvme1n1" 00:08:28.704 }, 00:08:28.704 { 00:08:28.704 "nbd_device": "/dev/nbd10", 00:08:28.704 "bdev_name": "Nvme2n1" 00:08:28.704 }, 00:08:28.704 { 00:08:28.704 "nbd_device": "/dev/nbd11", 00:08:28.704 "bdev_name": "Nvme2n2" 00:08:28.704 }, 00:08:28.704 { 00:08:28.704 "nbd_device": "/dev/nbd12", 00:08:28.704 "bdev_name": "Nvme2n3" 00:08:28.704 }, 00:08:28.704 { 00:08:28.704 "nbd_device": "/dev/nbd13", 00:08:28.704 "bdev_name": "Nvme3n1" 00:08:28.704 } 00:08:28.704 ]' 00:08:28.704 17:54:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:28.704 { 00:08:28.704 "nbd_device": "/dev/nbd0", 00:08:28.704 "bdev_name": "Nvme0n1" 00:08:28.704 }, 00:08:28.704 { 00:08:28.704 "nbd_device": "/dev/nbd1", 00:08:28.704 "bdev_name": "Nvme1n1" 00:08:28.704 }, 00:08:28.704 { 00:08:28.704 "nbd_device": "/dev/nbd10", 00:08:28.704 "bdev_name": "Nvme2n1" 00:08:28.704 }, 00:08:28.704 { 00:08:28.704 "nbd_device": "/dev/nbd11", 00:08:28.704 "bdev_name": "Nvme2n2" 00:08:28.704 }, 00:08:28.704 { 00:08:28.704 "nbd_device": "/dev/nbd12", 00:08:28.704 "bdev_name": "Nvme2n3" 00:08:28.704 }, 00:08:28.704 { 00:08:28.704 "nbd_device": "/dev/nbd13", 00:08:28.704 "bdev_name": "Nvme3n1" 00:08:28.704 } 00:08:28.704 ]' 00:08:28.704 17:54:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:28.704 17:54:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:28.704 /dev/nbd1 00:08:28.704 /dev/nbd10 00:08:28.704 /dev/nbd11 00:08:28.704 /dev/nbd12 00:08:28.704 /dev/nbd13' 00:08:28.704 17:54:45 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:28.704 /dev/nbd1 00:08:28.704 /dev/nbd10 00:08:28.704 /dev/nbd11 00:08:28.704 /dev/nbd12 00:08:28.704 /dev/nbd13' 00:08:28.704 17:54:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:28.704 17:54:45 -- bdev/nbd_common.sh@65 -- # count=6 00:08:28.704 17:54:45 -- bdev/nbd_common.sh@66 -- # echo 6 00:08:28.704 17:54:45 -- bdev/nbd_common.sh@95 -- # count=6 
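nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs; the test flattens it to one device path per line with jq and counts the paths with grep -c to confirm all six attached. The same query, sketched standalone (reusing $rpc and $sock from the loop sketch above):

    disks_json=$("$rpc" -s "$sock" nbd_get_disks)
    # '.[] | .nbd_device' emits one /dev/nbdX path per array element.
    nbd_disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)
    [ "$count" -eq 6 ] && echo "all 6 nbd devices attached"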
00:08:28.704 17:54:45 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:28.704 17:54:45 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:28.704 17:54:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:28.704 17:54:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:28.704 17:54:45 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:28.704 17:54:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:28.704 17:54:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:28.704 17:54:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:28.704 256+0 records in 00:08:28.704 256+0 records out 00:08:28.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122205 s, 85.8 MB/s 00:08:28.704 17:54:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:28.704 17:54:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:28.962 256+0 records in 00:08:28.962 256+0 records out 00:08:28.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129577 s, 8.1 MB/s 00:08:28.962 17:54:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:28.962 17:54:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:28.962 256+0 records in 00:08:28.962 256+0 records out 00:08:28.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132807 s, 7.9 MB/s 00:08:28.962 17:54:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:28.962 17:54:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:29.220 256+0 records in 00:08:29.220 256+0 records out 00:08:29.220 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1321 s, 7.9 MB/s 00:08:29.220 17:54:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.220 17:54:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:29.220 256+0 records in 00:08:29.220 256+0 records out 00:08:29.220 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129407 s, 8.1 MB/s 00:08:29.220 17:54:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.220 17:54:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:29.479 256+0 records in 00:08:29.479 256+0 records out 00:08:29.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125158 s, 8.4 MB/s 00:08:29.479 17:54:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.479 17:54:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:29.479 256+0 records in 00:08:29.479 256+0 records out 00:08:29.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13114 s, 8.0 MB/s 00:08:29.479 17:54:46 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:29.479 17:54:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:29.479 17:54:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:29.479 17:54:46 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:08:29.479 17:54:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:29.479 17:54:46 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:29.479 17:54:46 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:29.479 17:54:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.479 17:54:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:29.479 17:54:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.479 17:54:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:29.737 17:54:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.737 17:54:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:29.737 17:54:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.737 17:54:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:29.737 17:54:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.737 17:54:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:29.737 17:54:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.737 17:54:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:29.737 17:54:46 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:29.737 17:54:46 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:29.737 17:54:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:29.737 17:54:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:29.737 17:54:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:29.737 17:54:46 -- bdev/nbd_common.sh@51 -- # local i 00:08:29.737 17:54:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:29.737 17:54:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:29.995 17:54:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:29.995 17:54:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:29.995 17:54:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:29.995 17:54:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:29.995 17:54:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:29.995 17:54:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:29.995 17:54:46 -- bdev/nbd_common.sh@41 -- # break 00:08:29.995 17:54:46 -- bdev/nbd_common.sh@45 -- # return 0 00:08:29.995 17:54:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:29.995 17:54:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:29.995 17:54:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:29.995 17:54:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:29.995 17:54:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:29.995 17:54:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:29.995 17:54:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:29.995 17:54:46 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:30.253 17:54:46 -- bdev/nbd_common.sh@41 -- # break 00:08:30.253 17:54:46 -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.253 17:54:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.253 17:54:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:30.253 17:54:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:30.253 17:54:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:30.253 17:54:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:30.253 17:54:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.253 17:54:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.253 17:54:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:30.253 17:54:47 -- bdev/nbd_common.sh@41 -- # break 00:08:30.253 17:54:47 -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.253 17:54:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.253 17:54:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:30.511 17:54:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:30.511 17:54:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:30.511 17:54:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:30.511 17:54:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.511 17:54:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.511 17:54:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:30.511 17:54:47 -- bdev/nbd_common.sh@41 -- # break 00:08:30.511 17:54:47 -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.511 17:54:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.511 17:54:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:30.770 17:54:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:30.770 17:54:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:30.770 17:54:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:30.770 17:54:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.770 17:54:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.770 17:54:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:30.770 17:54:47 -- bdev/nbd_common.sh@41 -- # break 00:08:30.770 17:54:47 -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.770 17:54:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.770 17:54:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:31.029 17:54:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:31.029 17:54:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:31.029 17:54:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:31.029 17:54:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.029 17:54:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.029 17:54:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:31.029 17:54:47 -- bdev/nbd_common.sh@41 -- # break 00:08:31.029 17:54:47 -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.029 17:54:47 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:31.029 17:54:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.029 17:54:47 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:31.288 17:54:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:31.288 17:54:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:31.288 17:54:48 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:31.288 17:54:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:31.288 17:54:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:31.288 17:54:48 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:31.288 17:54:48 -- bdev/nbd_common.sh@65 -- # true 00:08:31.288 17:54:48 -- bdev/nbd_common.sh@65 -- # count=0 00:08:31.288 17:54:48 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:31.288 17:54:48 -- bdev/nbd_common.sh@104 -- # count=0 00:08:31.288 17:54:48 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:31.288 17:54:48 -- bdev/nbd_common.sh@109 -- # return 0 00:08:31.288 17:54:48 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:31.288 17:54:48 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.288 17:54:48 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:31.288 17:54:48 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:31.288 17:54:48 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:31.288 17:54:48 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:31.547 malloc_lvol_verify 00:08:31.547 17:54:48 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:31.547 ffc1ab2e-20cf-4931-82a7-f71c239b143d 00:08:31.805 17:54:48 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:31.805 0de6be80-bf0a-49f6-a3d5-7d48799ca0a7 00:08:31.805 17:54:48 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:32.067 /dev/nbd0 00:08:32.067 17:54:48 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:32.067 mke2fs 1.47.0 (5-Feb-2023) 00:08:32.067 Discarding device blocks: 0/4096 done 00:08:32.067 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:32.067 00:08:32.067 Allocating group tables: 0/1 done 00:08:32.067 Writing inode tables: 0/1 done 00:08:32.067 Creating journal (1024 blocks): done 00:08:32.067 Writing superblocks and filesystem accounting information: 0/1 done 00:08:32.067 00:08:32.067 17:54:48 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:32.067 17:54:48 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:32.067 17:54:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.067 17:54:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:32.067 17:54:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:32.067 17:54:48 -- bdev/nbd_common.sh@51 -- # local i 00:08:32.067 17:54:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.067 17:54:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:32.346 17:54:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:32.346 17:54:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:32.346 17:54:49 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:08:32.346 17:54:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.346 17:54:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.346 17:54:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:32.346 17:54:49 -- bdev/nbd_common.sh@41 -- # break 00:08:32.346 17:54:49 -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.346 17:54:49 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:32.346 17:54:49 -- bdev/nbd_common.sh@147 -- # return 0 00:08:32.346 17:54:49 -- bdev/blockdev.sh@324 -- # killprocess 72402 00:08:32.346 17:54:49 -- common/autotest_common.sh@936 -- # '[' -z 72402 ']' 00:08:32.346 17:54:49 -- common/autotest_common.sh@940 -- # kill -0 72402 00:08:32.346 17:54:49 -- common/autotest_common.sh@941 -- # uname 00:08:32.346 17:54:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:32.346 17:54:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72402 00:08:32.346 17:54:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:32.346 killing process with pid 72402 00:08:32.346 17:54:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:32.346 17:54:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72402' 00:08:32.346 17:54:49 -- common/autotest_common.sh@955 -- # kill 72402 00:08:32.346 17:54:49 -- common/autotest_common.sh@960 -- # wait 72402 00:08:32.603 17:54:49 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:08:32.603 00:08:32.603 real 0m10.060s 00:08:32.603 user 0m13.406s 00:08:32.603 sys 0m4.571s 00:08:32.603 17:54:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.603 17:54:49 -- common/autotest_common.sh@10 -- # set +x 00:08:32.603 ************************************ 00:08:32.603 END TEST bdev_nbd 00:08:32.603 ************************************ 00:08:32.603 17:54:49 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:08:32.603 17:54:49 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:08:32.603 skipping fio tests on NVMe due to multi-ns failures. 00:08:32.603 17:54:49 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:32.603 17:54:49 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:32.603 17:54:49 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:32.603 17:54:49 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:32.603 17:54:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.603 17:54:49 -- common/autotest_common.sh@10 -- # set +x 00:08:32.603 ************************************ 00:08:32.603 START TEST bdev_verify 00:08:32.603 ************************************ 00:08:32.603 17:54:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:32.860 [2024-11-26 17:54:49.582935] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
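bdev_verify hands the generated bdev.json to the bdevperf example app: queue depth 128, 4 KiB I/Os, a verify workload for 5 seconds, with core mask 0x3 so both reactors (the two "Reactor started" notices below) drive I/O. The invocation as run by the trace; the -q/-o/-w/-t/-m comments are the documented meanings, while -C is copied verbatim from the trace:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # -q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds,
    # -m reactor core mask; -C as in the trace (spread the job across the mask)
    "$bdevperf" --json "$conf" -q 128 -o 4096 -w verify -t 5 -C -m 0x3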
00:08:32.860 [2024-11-26 17:54:49.583088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72775 ] 00:08:32.860 [2024-11-26 17:54:49.731331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:32.860 [2024-11-26 17:54:49.780567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.860 [2024-11-26 17:54:49.780679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.427 Running I/O for 5 seconds... 00:08:38.700 00:08:38.700 Latency(us) 00:08:38.700 [2024-11-26T17:54:55.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.700 [2024-11-26T17:54:55.626Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:38.700 Verification LBA range: start 0x0 length 0xbd0bd 00:08:38.700 Nvme0n1 : 5.05 2687.71 10.50 0.00 0.00 47458.45 9317.17 56429.39 00:08:38.700 [2024-11-26T17:54:55.626Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:38.700 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:38.700 Nvme0n1 : 5.06 2037.20 7.96 0.00 0.00 62658.08 7001.03 62325.00 00:08:38.700 [2024-11-26T17:54:55.626Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:38.700 Verification LBA range: start 0x0 length 0xa0000 00:08:38.700 Nvme1n1 : 5.05 2687.00 10.50 0.00 0.00 47440.03 9211.89 54744.93 00:08:38.700 [2024-11-26T17:54:55.626Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:38.700 Verification LBA range: start 0xa0000 length 0xa0000 00:08:38.700 Nvme1n1 : 5.06 2035.58 7.95 0.00 0.00 62642.33 10001.48 61482.77 00:08:38.700 [2024-11-26T17:54:55.626Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:38.700 Verification LBA range: start 0x0 length 0x80000 00:08:38.701 Nvme2n1 : 5.05 2692.54 10.52 0.00 0.00 47253.93 3868.99 49481.00 00:08:38.701 [2024-11-26T17:54:55.627Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:38.701 Verification LBA range: start 0x80000 length 0x80000 00:08:38.701 Nvme2n1 : 5.06 2034.91 7.95 0.00 0.00 62466.24 10527.87 61903.88 00:08:38.701 [2024-11-26T17:54:55.627Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:38.701 Verification LBA range: start 0x0 length 0x80000 00:08:38.701 Nvme2n2 : 5.06 2691.99 10.52 0.00 0.00 47194.64 3658.44 50323.23 00:08:38.701 [2024-11-26T17:54:55.627Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:38.701 Verification LBA range: start 0x80000 length 0x80000 00:08:38.701 Nvme2n2 : 5.06 2034.32 7.95 0.00 0.00 62429.79 10422.59 62325.00 00:08:38.701 [2024-11-26T17:54:55.627Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:38.701 Verification LBA range: start 0x0 length 0x80000 00:08:38.701 Nvme2n3 : 5.06 2696.91 10.53 0.00 0.00 47079.51 2684.61 50323.23 00:08:38.701 [2024-11-26T17:54:55.627Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:38.701 Verification LBA range: start 0x80000 length 0x80000 00:08:38.701 Nvme2n3 : 5.07 2032.36 7.94 0.00 0.00 62410.77 13212.48 63167.23 00:08:38.701 [2024-11-26T17:54:55.627Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:38.701 Verification LBA range: start 0x0 length 0x20000 00:08:38.701 Nvme3n1 : 
5.07 2695.55 10.53 0.00 0.00 47052.53 4763.86 50323.23 00:08:38.701 [2024-11-26T17:54:55.627Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:38.701 Verification LBA range: start 0x20000 length 0x20000 00:08:38.701 Nvme3n1 : 5.07 2031.94 7.94 0.00 0.00 62382.56 12475.53 63588.34 00:08:38.701 [2024-11-26T17:54:55.627Z] =================================================================================================================== 00:08:38.701 [2024-11-26T17:54:55.627Z] Total : 28358.00 110.77 0.00 0.00 53816.20 2684.61 63588.34 00:08:46.865 00:08:46.865 real 0m13.045s 00:08:46.865 user 0m25.089s 00:08:46.865 sys 0m0.378s 00:08:46.865 17:55:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:46.865 17:55:02 -- common/autotest_common.sh@10 -- # set +x 00:08:46.865 ************************************ 00:08:46.865 END TEST bdev_verify 00:08:46.865 ************************************ 00:08:46.865 17:55:02 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:46.865 17:55:02 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:46.865 17:55:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.865 17:55:02 -- common/autotest_common.sh@10 -- # set +x 00:08:46.865 ************************************ 00:08:46.865 START TEST bdev_verify_big_io 00:08:46.865 ************************************ 00:08:46.865 17:55:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:46.865 [2024-11-26 17:55:02.708009] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:46.865 [2024-11-26 17:55:02.708151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72903 ] 00:08:46.865 [2024-11-26 17:55:02.861562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:46.865 [2024-11-26 17:55:02.902198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.865 [2024-11-26 17:55:02.902324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.865 Running I/O for 5 seconds... 
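The big-I/O pass that starts here reuses the exact same bdevperf harness against the same six namespaces; only the I/O size changes, from 4 KiB to 64 KiB. Sketched with the variables from the invocation above:

    # Same verify workload, 64 KiB blocks instead of 4 KiB.
    "$bdevperf" --json "$conf" -q 128 -o 65536 -w verify -t 5 -C -m 0x3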
00:08:52.150 00:08:52.150 Latency(us) 00:08:52.150 [2024-11-26T17:55:09.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.150 [2024-11-26T17:55:09.076Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:52.150 Verification LBA range: start 0x0 length 0xbd0b 00:08:52.150 Nvme0n1 : 5.36 333.60 20.85 0.00 0.00 374619.31 39795.35 774851.34 00:08:52.150 [2024-11-26T17:55:09.076Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:52.150 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:52.150 Nvme0n1 : 5.32 343.71 21.48 0.00 0.00 366436.62 6685.20 409323.64 00:08:52.150 [2024-11-26T17:55:09.076Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:52.150 Verification LBA range: start 0x0 length 0xa000 00:08:52.150 Nvme1n1 : 5.37 339.91 21.24 0.00 0.00 362791.92 15686.53 687259.45 00:08:52.150 [2024-11-26T17:55:09.076Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:52.150 Verification LBA range: start 0xa000 length 0xa000 00:08:52.150 Nvme1n1 : 5.32 351.61 21.98 0.00 0.00 359168.11 5237.62 405954.72 00:08:52.150 [2024-11-26T17:55:09.076Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:52.150 Verification LBA range: start 0x0 length 0x8000 00:08:52.150 Nvme2n1 : 5.39 347.34 21.71 0.00 0.00 348843.62 12212.33 596298.64 00:08:52.150 [2024-11-26T17:55:09.076Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:52.150 Verification LBA range: start 0x8000 length 0x8000 00:08:52.150 Nvme2n1 : 5.33 351.50 21.97 0.00 0.00 356569.92 5895.61 411008.10 00:08:52.150 [2024-11-26T17:55:09.076Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:52.150 Verification LBA range: start 0x0 length 0x8000 00:08:52.150 Nvme2n2 : 5.39 347.23 21.70 0.00 0.00 343599.51 13054.56 528920.26 00:08:52.150 [2024-11-26T17:55:09.076Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:52.150 Verification LBA range: start 0x8000 length 0x8000 00:08:52.150 Nvme2n2 : 5.33 351.38 21.96 0.00 0.00 353986.24 6343.04 414377.02 00:08:52.150 [2024-11-26T17:55:09.076Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:52.150 Verification LBA range: start 0x0 length 0x8000 00:08:52.150 Nvme2n3 : 5.43 365.66 22.85 0.00 0.00 322189.90 10580.51 478386.48 00:08:52.150 [2024-11-26T17:55:09.076Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:52.150 Verification LBA range: start 0x8000 length 0x8000 00:08:52.150 Nvme2n3 : 5.33 351.26 21.95 0.00 0.00 351390.78 7001.03 421114.86 00:08:52.150 [2024-11-26T17:55:09.076Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:52.150 Verification LBA range: start 0x0 length 0x2000 00:08:52.150 Nvme3n1 : 5.47 420.01 26.25 0.00 0.00 276980.13 523.10 437959.45 00:08:52.150 [2024-11-26T17:55:09.076Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:52.150 Verification LBA range: start 0x2000 length 0x2000 00:08:52.150 Nvme3n1 : 5.33 351.14 21.95 0.00 0.00 348782.66 7843.26 417745.94 00:08:52.150 [2024-11-26T17:55:09.076Z] =================================================================================================================== 00:08:52.150 [2024-11-26T17:55:09.076Z] Total : 4254.38 265.90 0.00 0.00 345515.67 523.10 774851.34 00:08:52.722 00:08:52.722 real 0m6.836s 00:08:52.722 user 0m12.786s 
00:08:52.722 sys 0m0.296s 00:08:52.722 17:55:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:52.722 17:55:09 -- common/autotest_common.sh@10 -- # set +x 00:08:52.722 ************************************ 00:08:52.722 END TEST bdev_verify_big_io 00:08:52.722 ************************************ 00:08:52.722 17:55:09 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:52.722 17:55:09 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:52.722 17:55:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:52.722 17:55:09 -- common/autotest_common.sh@10 -- # set +x 00:08:52.722 ************************************ 00:08:52.722 START TEST bdev_write_zeroes 00:08:52.722 ************************************ 00:08:52.722 17:55:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:52.722 [2024-11-26 17:55:09.612009] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:52.722 [2024-11-26 17:55:09.612169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72996 ] 00:08:52.982 [2024-11-26 17:55:09.762151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.982 [2024-11-26 17:55:09.802172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.550 Running I/O for 1 seconds... 00:08:54.484 00:08:54.484 Latency(us) 00:08:54.484 [2024-11-26T17:55:11.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.484 [2024-11-26T17:55:11.410Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:54.484 Nvme0n1 : 1.01 13441.28 52.50 0.00 0.00 9497.77 7422.15 32215.29 00:08:54.484 [2024-11-26T17:55:11.410Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:54.484 Nvme1n1 : 1.01 13427.53 52.45 0.00 0.00 9497.21 7843.26 32215.29 00:08:54.484 [2024-11-26T17:55:11.410Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:54.484 Nvme2n1 : 1.01 13435.32 52.48 0.00 0.00 9455.39 7369.51 28846.37 00:08:54.484 [2024-11-26T17:55:11.410Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:54.484 Nvme2n2 : 1.02 13447.99 52.53 0.00 0.00 9411.00 6158.80 28214.70 00:08:54.484 [2024-11-26T17:55:11.410Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:54.484 Nvme2n3 : 1.02 13435.00 52.48 0.00 0.00 9389.43 6211.44 26530.24 00:08:54.484 [2024-11-26T17:55:11.410Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:54.484 Nvme3n1 : 1.02 13467.59 52.61 0.00 0.00 9336.32 4526.98 22845.48 00:08:54.484 [2024-11-26T17:55:11.410Z] =================================================================================================================== 00:08:54.484 [2024-11-26T17:55:11.410Z] Total : 80654.71 315.06 0.00 0.00 9430.89 4526.98 32215.29 00:08:54.743 00:08:54.743 real 0m1.936s 00:08:54.743 user 0m1.610s 00:08:54.743 sys 0m0.216s 00:08:54.743 17:55:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:54.743 
************************************ 00:08:54.743 END TEST bdev_write_zeroes 00:08:54.743 17:55:11 -- common/autotest_common.sh@10 -- # set +x 00:08:54.743 ************************************ 00:08:54.743 17:55:11 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:54.743 17:55:11 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:54.743 17:55:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:54.743 17:55:11 -- common/autotest_common.sh@10 -- # set +x 00:08:54.743 ************************************ 00:08:54.743 START TEST bdev_json_nonenclosed 00:08:54.743 ************************************ 00:08:54.743 17:55:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:54.743 [2024-11-26 17:55:11.619550] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:54.743 [2024-11-26 17:55:11.619690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73038 ] 00:08:55.002 [2024-11-26 17:55:11.769917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.002 [2024-11-26 17:55:11.815108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.002 [2024-11-26 17:55:11.815319] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:55.002 [2024-11-26 17:55:11.815352] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:55.262 00:08:55.262 real 0m0.392s 00:08:55.262 user 0m0.161s 00:08:55.262 sys 0m0.126s 00:08:55.262 17:55:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:55.262 17:55:11 -- common/autotest_common.sh@10 -- # set +x 00:08:55.262 ************************************ 00:08:55.262 END TEST bdev_json_nonenclosed 00:08:55.262 ************************************ 00:08:55.262 17:55:11 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:55.262 17:55:11 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:55.262 17:55:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.262 17:55:11 -- common/autotest_common.sh@10 -- # set +x 00:08:55.262 ************************************ 00:08:55.262 START TEST bdev_json_nonarray 00:08:55.262 ************************************ 00:08:55.262 17:55:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:55.262 [2024-11-26 17:55:12.084310] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:55.262 [2024-11-26 17:55:12.084447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73063 ] 00:08:55.521 [2024-11-26 17:55:12.235315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.521 [2024-11-26 17:55:12.281028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.521 [2024-11-26 17:55:12.281261] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:08:55.521 [2024-11-26 17:55:12.281294] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:55.521 00:08:55.521 real 0m0.395s 00:08:55.521 user 0m0.171s 00:08:55.521 sys 0m0.121s 00:08:55.521 17:55:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:55.521 17:55:12 -- common/autotest_common.sh@10 -- # set +x 00:08:55.521 ************************************ 00:08:55.521 END TEST bdev_json_nonarray 00:08:55.521 ************************************ 00:08:55.782 17:55:12 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:08:55.782 17:55:12 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:08:55.782 17:55:12 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:08:55.782 17:55:12 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:08:55.782 17:55:12 -- bdev/blockdev.sh@809 -- # cleanup 00:08:55.782 17:55:12 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:55.782 17:55:12 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:55.782 17:55:12 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:08:55.782 17:55:12 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:08:55.782 17:55:12 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:08:55.782 17:55:12 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:08:55.782 00:08:55.782 real 0m37.894s 00:08:55.782 user 0m59.910s 00:08:55.782 sys 0m7.421s 00:08:55.782 17:55:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:55.782 17:55:12 -- common/autotest_common.sh@10 -- # set +x 00:08:55.782 ************************************ 00:08:55.782 END TEST blockdev_nvme 00:08:55.782 ************************************ 00:08:55.782 17:55:12 -- spdk/autotest.sh@206 -- # uname -s 00:08:55.782 17:55:12 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:08:55.782 17:55:12 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:55.782 17:55:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:55.782 17:55:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.782 17:55:12 -- common/autotest_common.sh@10 -- # set +x 00:08:55.782 ************************************ 00:08:55.782 START TEST blockdev_nvme_gpt 00:08:55.782 ************************************ 00:08:55.782 17:55:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:55.782 * Looking for test storage... 
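The two json_config tests above are deliberate negative cases: bdevperf is first fed a config whose top level is not enclosed in {}, then one whose "subsystems" key is not an array, and the pass condition is exactly the *ERROR* lines in the trace followed by a non-zero spdk_app_stop. A sketch of how such a malformed input could be produced and checked; the here-doc content is an illustrative guess at the shape, not the repo's literal nonenclosed.json fixture:

    # Top-level JSON that is a bare array, i.e. not enclosed in {} --
    # spdk_subsystem_init_from_json_config should reject it.
    cat > /tmp/nonenclosed.json <<'EOF'
    [ { "subsystems": [] } ]
    EOF

    if "$bdevperf" --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''; then
        echo "unexpected success on malformed config" >&2
        exit 1
    fi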
00:08:55.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:55.782 17:55:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:55.782 17:55:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:55.782 17:55:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:56.040 17:55:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:56.040 17:55:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:56.040 17:55:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:56.040 17:55:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:56.040 17:55:12 -- scripts/common.sh@335 -- # IFS=.-: 00:08:56.040 17:55:12 -- scripts/common.sh@335 -- # read -ra ver1 00:08:56.040 17:55:12 -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.040 17:55:12 -- scripts/common.sh@336 -- # read -ra ver2 00:08:56.040 17:55:12 -- scripts/common.sh@337 -- # local 'op=<' 00:08:56.040 17:55:12 -- scripts/common.sh@339 -- # ver1_l=2 00:08:56.040 17:55:12 -- scripts/common.sh@340 -- # ver2_l=1 00:08:56.040 17:55:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:56.040 17:55:12 -- scripts/common.sh@343 -- # case "$op" in 00:08:56.040 17:55:12 -- scripts/common.sh@344 -- # : 1 00:08:56.040 17:55:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:56.040 17:55:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:56.040 17:55:12 -- scripts/common.sh@364 -- # decimal 1 00:08:56.040 17:55:12 -- scripts/common.sh@352 -- # local d=1 00:08:56.040 17:55:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.040 17:55:12 -- scripts/common.sh@354 -- # echo 1 00:08:56.040 17:55:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:56.040 17:55:12 -- scripts/common.sh@365 -- # decimal 2 00:08:56.040 17:55:12 -- scripts/common.sh@352 -- # local d=2 00:08:56.040 17:55:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.040 17:55:12 -- scripts/common.sh@354 -- # echo 2 00:08:56.040 17:55:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:56.040 17:55:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:56.040 17:55:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:56.040 17:55:12 -- scripts/common.sh@367 -- # return 0 00:08:56.040 17:55:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.041 17:55:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:56.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.041 --rc genhtml_branch_coverage=1 00:08:56.041 --rc genhtml_function_coverage=1 00:08:56.041 --rc genhtml_legend=1 00:08:56.041 --rc geninfo_all_blocks=1 00:08:56.041 --rc geninfo_unexecuted_blocks=1 00:08:56.041 00:08:56.041 ' 00:08:56.041 17:55:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:56.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.041 --rc genhtml_branch_coverage=1 00:08:56.041 --rc genhtml_function_coverage=1 00:08:56.041 --rc genhtml_legend=1 00:08:56.041 --rc geninfo_all_blocks=1 00:08:56.041 --rc geninfo_unexecuted_blocks=1 00:08:56.041 00:08:56.041 ' 00:08:56.041 17:55:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:56.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.041 --rc genhtml_branch_coverage=1 00:08:56.041 --rc genhtml_function_coverage=1 00:08:56.041 --rc genhtml_legend=1 00:08:56.041 --rc geninfo_all_blocks=1 00:08:56.041 --rc geninfo_unexecuted_blocks=1 00:08:56.041 00:08:56.041 ' 00:08:56.041 17:55:12 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:56.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.041 --rc genhtml_branch_coverage=1 00:08:56.041 --rc genhtml_function_coverage=1 00:08:56.041 --rc genhtml_legend=1 00:08:56.041 --rc geninfo_all_blocks=1 00:08:56.041 --rc geninfo_unexecuted_blocks=1 00:08:56.041 00:08:56.041 ' 00:08:56.041 17:55:12 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:56.041 17:55:12 -- bdev/nbd_common.sh@6 -- # set -e 00:08:56.041 17:55:12 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:56.041 17:55:12 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:56.041 17:55:12 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:56.041 17:55:12 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:56.041 17:55:12 -- bdev/blockdev.sh@18 -- # : 00:08:56.041 17:55:12 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:08:56.041 17:55:12 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:08:56.041 17:55:12 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:08:56.041 17:55:12 -- bdev/blockdev.sh@672 -- # uname -s 00:08:56.041 17:55:12 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:08:56.041 17:55:12 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:08:56.041 17:55:12 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:08:56.041 17:55:12 -- bdev/blockdev.sh@681 -- # crypto_device= 00:08:56.041 17:55:12 -- bdev/blockdev.sh@682 -- # dek= 00:08:56.041 17:55:12 -- bdev/blockdev.sh@683 -- # env_ctx= 00:08:56.041 17:55:12 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:08:56.041 17:55:12 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:08:56.041 17:55:12 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:08:56.041 17:55:12 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:08:56.041 17:55:12 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:08:56.041 17:55:12 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=73141 00:08:56.041 17:55:12 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:56.041 17:55:12 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:56.041 17:55:12 -- bdev/blockdev.sh@47 -- # waitforlisten 73141 00:08:56.041 17:55:12 -- common/autotest_common.sh@829 -- # '[' -z 73141 ']' 00:08:56.041 17:55:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.041 17:55:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:56.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.041 17:55:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.041 17:55:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:56.041 17:55:12 -- common/autotest_common.sh@10 -- # set +x 00:08:56.041 [2024-11-26 17:55:12.879107] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
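The gpt suite starts a bare spdk_tgt (pid 73141 here) and blocks in waitforlisten until the target's RPC socket answers before issuing any bdev RPCs. A minimal sketch of that startup dance, assuming the default /var/tmp/spdk.sock socket and using the real rpc_get_methods RPC as the liveness probe (the suite's own helper retries with a bound rather than looping forever):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$spdk_tgt" &
    spdk_tgt_pid=$!
    # Poll the RPC socket until the target is up and answering.
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    trap 'kill "$spdk_tgt_pid"' SIGINT SIGTERM EXIT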
00:08:56.041 [2024-11-26 17:55:12.879236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73141 ] 00:08:56.300 [2024-11-26 17:55:13.029183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.300 [2024-11-26 17:55:13.074570] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:56.300 [2024-11-26 17:55:13.074758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.866 17:55:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:56.866 17:55:13 -- common/autotest_common.sh@862 -- # return 0 00:08:56.866 17:55:13 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:08:56.866 17:55:13 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:08:56.866 17:55:13 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:57.848 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:57.848 Waiting for block devices as requested 00:08:57.848 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:08:57.848 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:08:58.105 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:08:58.105 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:09:03.375 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:09:03.375 17:55:20 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:09:03.375 17:55:20 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:09:03.375 17:55:20 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:09:03.375 17:55:20 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:09:03.375 17:55:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:09:03.375 17:55:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:09:03.375 17:55:20 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:09:03.375 17:55:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:09:03.375 17:55:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:09:03.375 17:55:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:09:03.375 17:55:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:09:03.375 17:55:20 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:09:03.375 17:55:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:03.375 17:55:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:09:03.376 17:55:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:09:03.376 17:55:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:09:03.376 17:55:20 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:09:03.376 17:55:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:03.376 17:55:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:09:03.376 17:55:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:09:03.376 17:55:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:09:03.376 17:55:20 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:09:03.376 17:55:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:09:03.376 17:55:20 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:09:03.376 17:55:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:09:03.376 17:55:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:09:03.376 17:55:20 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:09:03.376 17:55:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:09:03.376 17:55:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:09:03.376 17:55:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:09:03.376 17:55:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:09:03.376 17:55:20 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:09:03.376 17:55:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:03.376 17:55:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:09:03.376 17:55:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:09:03.376 17:55:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:09:03.376 17:55:20 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:09:03.376 17:55:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:03.376 17:55:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:09:03.376 17:55:20 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:09:03.376 17:55:20 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:09:03.376 17:55:20 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:09:03.376 17:55:20 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:09:03.376 17:55:20 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:09:03.376 17:55:20 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:09:03.376 17:55:20 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:09:03.376 17:55:20 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:09:03.376 BYT; 00:09:03.376 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:09:03.376 17:55:20 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:09:03.376 BYT; 00:09:03.376 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:09:03.376 17:55:20 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:09:03.376 17:55:20 -- bdev/blockdev.sh@114 -- # break 00:09:03.376 17:55:20 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:09:03.376 17:55:20 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:03.376 17:55:20 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:03.376 17:55:20 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:03.376 17:55:20 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:09:03.376 17:55:20 -- scripts/common.sh@410 -- # local spdk_guid 00:09:03.376 17:55:20 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:03.376 17:55:20 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:03.376 17:55:20 -- scripts/common.sh@415 -- # IFS='()' 00:09:03.376 17:55:20 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:09:03.376 17:55:20 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:03.376 17:55:20 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:03.376 17:55:20 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:03.376 17:55:20 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:03.376 17:55:20 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:03.376 17:55:20 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:09:03.376 17:55:20 -- scripts/common.sh@422 -- # local spdk_guid 00:09:03.376 17:55:20 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:03.376 17:55:20 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:03.376 17:55:20 -- scripts/common.sh@427 -- # IFS='()' 00:09:03.376 17:55:20 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:09:03.376 17:55:20 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:03.376 17:55:20 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:03.376 17:55:20 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:03.376 17:55:20 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:03.376 17:55:20 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:03.376 17:55:20 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:09:04.755 The operation has completed successfully. 00:09:04.755 17:55:21 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:09:05.694 The operation has completed successfully. 
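[editor's note] The xtrace above is setup_gpt_conf at work: non-zoned NVMe namespaces are filtered via sysfs, one candidate gets a fresh GPT label with two half-disk partitions, and sgdisk then stamps the SPDK partition type GUIDs (parsed out of module/bdev/gpt/gpt.h) plus fixed unique GUIDs onto them. A condensed standalone sketch of that flow, assuming the same gpt.h macro layout the greps above imply and a scratch /dev/nvme2n1 as in this run:

    # Skip zoned namespaces: the GPT test only targets conventional block devices.
    for nvme in /sys/block/nvme*; do
        dev=$(basename "$nvme")
        [[ -e $nvme/queue/zoned && $(cat "$nvme/queue/zoned") != none ]] && continue
        echo "candidate: /dev/$dev"
    done

    # Pull the partition type GUID out of gpt.h, mirroring scripts/common.sh above.
    gpt_h=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
    IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$gpt_h")
    spdk_guid=${spdk_guid//, /-}   # 0x6527994e-0x2c5a-... (first form in the xtrace)
    spdk_guid=${spdk_guid//0x/}    # 6527994e-2c5a-4eec-9613-8f5944074e8b

    # Label the disk, then stamp the type GUID (-t) and a fixed unique GUID (-u)
    # onto partition 1, exactly as blockdev.sh@126-130 does above.
    parted -s /dev/nvme2n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    sgdisk -t 1:"$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1

The exact parameter-expansion steps are an approximation of what common.sh does between the two spdk_guid= assignments logged above; the commands themselves (grep/parted/sgdisk and their flags) are taken directly from the transcript.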
00:09:05.694 17:55:22 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:07.074 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:07.074 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:09:07.074 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:09:07.074 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:07.360 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:09:07.360 17:55:24 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:09:07.360 17:55:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.360 17:55:24 -- common/autotest_common.sh@10 -- # set +x 00:09:07.360 [] 00:09:07.360 17:55:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.360 17:55:24 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:09:07.360 17:55:24 -- bdev/blockdev.sh@79 -- # local json 00:09:07.360 17:55:24 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:09:07.360 17:55:24 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:07.360 17:55:24 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:09:07.360 17:55:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.360 17:55:24 -- common/autotest_common.sh@10 -- # set +x 00:09:07.647 17:55:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.647 17:55:24 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:09:07.647 17:55:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.647 17:55:24 -- common/autotest_common.sh@10 -- # set +x 00:09:07.647 17:55:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.647 17:55:24 -- bdev/blockdev.sh@738 -- # cat 00:09:07.647 17:55:24 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:09:07.647 17:55:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.647 17:55:24 -- common/autotest_common.sh@10 -- # set +x 00:09:07.647 17:55:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.647 17:55:24 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:09:07.647 17:55:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.647 17:55:24 -- common/autotest_common.sh@10 -- # set +x 00:09:07.905 17:55:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.905 17:55:24 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:07.905 17:55:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.905 17:55:24 -- common/autotest_common.sh@10 -- # set +x 00:09:07.905 17:55:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.905 17:55:24 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:09:07.905 17:55:24 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:09:07.905 17:55:24 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:09:07.905 17:55:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.905 17:55:24 -- common/autotest_common.sh@10 -- # set +x 00:09:07.905 17:55:24 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.905 17:55:24 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:09:07.905 17:55:24 -- bdev/blockdev.sh@747 -- # jq -r .name 00:09:07.905 17:55:24 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "c876c6d0-2807-44cb-991a-becc431ce826"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c876c6d0-2807-44cb-991a-becc431ce826",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' 
"name": "Nvme2n1",' ' "aliases": [' ' "994af8dc-5fcf-411c-a8ab-583d1a5b878c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "994af8dc-5fcf-411c-a8ab-583d1a5b878c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "4973e23c-acd8-4287-b72e-aeb44e4da378"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4973e23c-acd8-4287-b72e-aeb44e4da378",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "7a6bcb7c-dd5c-4487-8ab4-ac993ffccf55"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7a6bcb7c-dd5c-4487-8ab4-ac993ffccf55",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' 
' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "9b2f1058-354c-4f1e-ae13-38c0264de3e2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9b2f1058-354c-4f1e-ae13-38c0264de3e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:07.905 17:55:24 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:09:07.905 17:55:24 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:09:07.905 17:55:24 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:09:07.905 17:55:24 -- bdev/blockdev.sh@752 -- # killprocess 73141 00:09:07.905 17:55:24 -- common/autotest_common.sh@936 -- # '[' -z 73141 ']' 00:09:07.905 17:55:24 -- common/autotest_common.sh@940 -- # kill -0 73141 00:09:07.905 17:55:24 -- common/autotest_common.sh@941 -- # uname 00:09:07.905 17:55:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:07.905 17:55:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73141 00:09:07.905 17:55:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:07.905 17:55:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:07.905 killing process with pid 73141 00:09:07.905 17:55:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73141' 00:09:07.905 17:55:24 -- common/autotest_common.sh@955 -- # kill 73141 00:09:07.905 17:55:24 -- common/autotest_common.sh@960 -- # wait 73141 00:09:08.473 17:55:25 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:08.473 17:55:25 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:08.473 17:55:25 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:08.473 17:55:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:08.473 17:55:25 -- common/autotest_common.sh@10 -- # set +x 00:09:08.473 ************************************ 00:09:08.473 START TEST bdev_hello_world 00:09:08.473 ************************************ 00:09:08.473 17:55:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev 
--json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:08.473 [2024-11-26 17:55:25.270852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:08.473 [2024-11-26 17:55:25.270982] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73792 ] 00:09:08.732 [2024-11-26 17:55:25.422866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.732 [2024-11-26 17:55:25.463250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.992 [2024-11-26 17:55:25.837804] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:08.992 [2024-11-26 17:55:25.837855] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:09:08.992 [2024-11-26 17:55:25.837880] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:08.992 [2024-11-26 17:55:25.840168] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:08.992 [2024-11-26 17:55:25.840704] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:08.992 [2024-11-26 17:55:25.840741] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:08.992 [2024-11-26 17:55:25.840976] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:08.992 00:09:08.992 [2024-11-26 17:55:25.841019] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:09.251 00:09:09.251 real 0m0.884s 00:09:09.251 user 0m0.568s 00:09:09.251 sys 0m0.214s 00:09:09.251 17:55:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:09.251 17:55:26 -- common/autotest_common.sh@10 -- # set +x 00:09:09.251 ************************************ 00:09:09.251 END TEST bdev_hello_world 00:09:09.251 ************************************ 00:09:09.251 17:55:26 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:09:09.251 17:55:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:09.251 17:55:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:09.251 17:55:26 -- common/autotest_common.sh@10 -- # set +x 00:09:09.251 ************************************ 00:09:09.251 START TEST bdev_bounds 00:09:09.251 ************************************ 00:09:09.251 17:55:26 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:09:09.251 17:55:26 -- bdev/blockdev.sh@288 -- # bdevio_pid=73823 00:09:09.251 17:55:26 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:09.251 17:55:26 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:09.251 Process bdevio pid: 73823 00:09:09.251 17:55:26 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 73823' 00:09:09.251 17:55:26 -- bdev/blockdev.sh@291 -- # waitforlisten 73823 00:09:09.251 17:55:26 -- common/autotest_common.sh@829 -- # '[' -z 73823 ']' 00:09:09.251 17:55:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.251 17:55:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:09.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.251 17:55:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
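[editor's note] Both the hello_bdev run above and the bdevio run that follows are driven by the same test/bdev/bdev.json: the bdev subsystem config that gen_nvme.sh emitted earlier, one bdev_nvme_attach_controller call per PCIe controller. A sketch of that file, pretty-printed from the one-liner at blockdev.sh@81; the top-level "subsystems" wrapper is an assumption (the xtrace only shows the bdev fragment), everything inside it is verbatim from the log:

    cat > bdev.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:06.0" } },
            { "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:07.0" } },
            { "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:08.0" } },
            { "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:09.0" } }
          ]
        }
      ]
    }
    JSON

    # hello_bdev then opens a single GPT partition bdev from that config,
    # as invoked in the TEST above:
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json bdev.json -b Nvme0n1p1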
00:09:09.251 17:55:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:09.251 17:55:26 -- common/autotest_common.sh@10 -- # set +x 00:09:09.510 [2024-11-26 17:55:26.234385] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:09.510 [2024-11-26 17:55:26.234528] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73823 ] 00:09:09.510 [2024-11-26 17:55:26.385994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:09.510 [2024-11-26 17:55:26.429683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.510 [2024-11-26 17:55:26.429737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.510 [2024-11-26 17:55:26.429822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.446 17:55:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.446 17:55:27 -- common/autotest_common.sh@862 -- # return 0 00:09:10.446 17:55:27 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:10.446 I/O targets: 00:09:10.446 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:09:10.446 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:09:10.446 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:10.446 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:10.446 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:10.446 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:10.446 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:10.446 00:09:10.446 00:09:10.446 CUnit - A unit testing framework for C - Version 2.1-3 00:09:10.446 http://cunit.sourceforge.net/ 00:09:10.446 00:09:10.446 00:09:10.446 Suite: bdevio tests on: Nvme3n1 00:09:10.446 Test: blockdev write read block ...passed 00:09:10.446 Test: blockdev write zeroes read block ...passed 00:09:10.446 Test: blockdev write zeroes read no split ...passed 00:09:10.446 Test: blockdev write zeroes read split ...passed 00:09:10.446 Test: blockdev write zeroes read split partial ...passed 00:09:10.446 Test: blockdev reset ...[2024-11-26 17:55:27.177421] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:10.446 [2024-11-26 17:55:27.179480] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:10.446 passed 00:09:10.446 Test: blockdev write read 8 blocks ...passed 00:09:10.446 Test: blockdev write read size > 128k ...passed 00:09:10.446 Test: blockdev write read invalid size ...passed 00:09:10.446 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:10.446 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:10.446 Test: blockdev write read max offset ...passed 00:09:10.446 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:10.446 Test: blockdev writev readv 8 blocks ...passed 00:09:10.446 Test: blockdev writev readv 30 x 1block ...passed 00:09:10.446 Test: blockdev writev readv block ...passed 00:09:10.446 Test: blockdev writev readv size > 128k ...passed 00:09:10.446 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:10.446 Test: blockdev comparev and writev ...[2024-11-26 17:55:27.185782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bd206000 len:0x1000 00:09:10.446 [2024-11-26 17:55:27.185843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:10.446 passed 00:09:10.446 Test: blockdev nvme passthru rw ...passed 00:09:10.446 Test: blockdev nvme passthru vendor specific ...passed 00:09:10.446 Test: blockdev nvme admin passthru ...[2024-11-26 17:55:27.186693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:10.446 [2024-11-26 17:55:27.186734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:10.446 passed 00:09:10.446 Test: blockdev copy ...passed 00:09:10.447 Suite: bdevio tests on: Nvme2n3 00:09:10.447 Test: blockdev write read block ...passed 00:09:10.447 Test: blockdev write zeroes read block ...passed 00:09:10.447 Test: blockdev write zeroes read no split ...passed 00:09:10.447 Test: blockdev write zeroes read split ...passed 00:09:10.447 Test: blockdev write zeroes read split partial ...passed 00:09:10.447 Test: blockdev reset ...[2024-11-26 17:55:27.200425] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:10.447 passed 00:09:10.447 Test: blockdev write read 8 blocks ...[2024-11-26 17:55:27.202586] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:10.447 passed 00:09:10.447 Test: blockdev write read size > 128k ...passed 00:09:10.447 Test: blockdev write read invalid size ...passed 00:09:10.447 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:10.447 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:10.447 Test: blockdev write read max offset ...passed 00:09:10.447 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:10.447 Test: blockdev writev readv 8 blocks ...passed 00:09:10.447 Test: blockdev writev readv 30 x 1block ...passed 00:09:10.447 Test: blockdev writev readv block ...passed 00:09:10.447 Test: blockdev writev readv size > 128k ...passed 00:09:10.447 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:10.447 Test: blockdev comparev and writev ...[2024-11-26 17:55:27.209088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b4606000 len:0x1000 00:09:10.447 [2024-11-26 17:55:27.209136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:10.447 passed 00:09:10.447 Test: blockdev nvme passthru rw ...passed 00:09:10.447 Test: blockdev nvme passthru vendor specific ...passed 00:09:10.447 Test: blockdev nvme admin passthru ...[2024-11-26 17:55:27.210018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:10.447 [2024-11-26 17:55:27.210062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:10.447 passed 00:09:10.447 Test: blockdev copy ...passed 00:09:10.447 Suite: bdevio tests on: Nvme2n2 00:09:10.447 Test: blockdev write read block ...passed 00:09:10.447 Test: blockdev write zeroes read block ...passed 00:09:10.447 Test: blockdev write zeroes read no split ...passed 00:09:10.447 Test: blockdev write zeroes read split ...passed 00:09:10.447 Test: blockdev write zeroes read split partial ...passed 00:09:10.447 Test: blockdev reset ...[2024-11-26 17:55:27.228119] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:10.447 [2024-11-26 17:55:27.230180] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:10.447 passed 00:09:10.447 Test: blockdev write read 8 blocks ...passed 00:09:10.447 Test: blockdev write read size > 128k ...passed 00:09:10.447 Test: blockdev write read invalid size ...passed 00:09:10.447 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:10.447 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:10.447 Test: blockdev write read max offset ...passed 00:09:10.447 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:10.447 Test: blockdev writev readv 8 blocks ...passed 00:09:10.447 Test: blockdev writev readv 30 x 1block ...passed 00:09:10.447 Test: blockdev writev readv block ...passed 00:09:10.447 Test: blockdev writev readv size > 128k ...passed 00:09:10.447 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:10.447 Test: blockdev comparev and writev ...[2024-11-26 17:55:27.237045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b4602000 len:0x1000 00:09:10.447 [2024-11-26 17:55:27.237094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:10.447 passed 00:09:10.447 Test: blockdev nvme passthru rw ...passed 00:09:10.447 Test: blockdev nvme passthru vendor specific ...passed 00:09:10.447 Test: blockdev nvme admin passthru ...[2024-11-26 17:55:27.237975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:10.447 [2024-11-26 17:55:27.238003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:10.447 passed 00:09:10.447 Test: blockdev copy ...passed 00:09:10.447 Suite: bdevio tests on: Nvme2n1 00:09:10.447 Test: blockdev write read block ...passed 00:09:10.447 Test: blockdev write zeroes read block ...passed 00:09:10.447 Test: blockdev write zeroes read no split ...passed 00:09:10.447 Test: blockdev write zeroes read split ...passed 00:09:10.447 Test: blockdev write zeroes read split partial ...passed 00:09:10.447 Test: blockdev reset ...[2024-11-26 17:55:27.256992] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:10.447 [2024-11-26 17:55:27.259055] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:10.447 passed 00:09:10.447 Test: blockdev write read 8 blocks ...passed 00:09:10.447 Test: blockdev write read size > 128k ...passed 00:09:10.447 Test: blockdev write read invalid size ...passed 00:09:10.447 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:10.447 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:10.447 Test: blockdev write read max offset ...passed 00:09:10.447 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:10.447 Test: blockdev writev readv 8 blocks ...passed 00:09:10.447 Test: blockdev writev readv 30 x 1block ...passed 00:09:10.447 Test: blockdev writev readv block ...passed 00:09:10.447 Test: blockdev writev readv size > 128k ...passed 00:09:10.447 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:10.447 Test: blockdev comparev and writev ...[2024-11-26 17:55:27.265970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bd20d000 len:0x1000 00:09:10.447 [2024-11-26 17:55:27.266053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:10.447 passed 00:09:10.447 Test: blockdev nvme passthru rw ...passed 00:09:10.447 Test: blockdev nvme passthru vendor specific ...[2024-11-26 17:55:27.267024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:10.447 [2024-11-26 17:55:27.267075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:10.447 passed 00:09:10.447 Test: blockdev nvme admin passthru ...passed 00:09:10.447 Test: blockdev copy ...passed 00:09:10.447 Suite: bdevio tests on: Nvme1n1 00:09:10.447 Test: blockdev write read block ...passed 00:09:10.447 Test: blockdev write zeroes read block ...passed 00:09:10.447 Test: blockdev write zeroes read no split ...passed 00:09:10.447 Test: blockdev write zeroes read split ...passed 00:09:10.447 Test: blockdev write zeroes read split partial ...passed 00:09:10.447 Test: blockdev reset ...[2024-11-26 17:55:27.295424] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:10.447 passed 00:09:10.447 Test: blockdev write read 8 blocks ...[2024-11-26 17:55:27.297225] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:10.447 passed 00:09:10.447 Test: blockdev write read size > 128k ...passed 00:09:10.447 Test: blockdev write read invalid size ...passed 00:09:10.447 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:10.447 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:10.447 Test: blockdev write read max offset ...passed 00:09:10.447 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:10.447 Test: blockdev writev readv 8 blocks ...passed 00:09:10.447 Test: blockdev writev readv 30 x 1block ...passed 00:09:10.447 Test: blockdev writev readv block ...passed 00:09:10.447 Test: blockdev writev readv size > 128k ...passed 00:09:10.447 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:10.447 Test: blockdev comparev and writev ...[2024-11-26 17:55:27.304107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bce36000 len:0x1000 00:09:10.447 [2024-11-26 17:55:27.304158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:10.447 passed 00:09:10.447 Test: blockdev nvme passthru rw ...passed 00:09:10.447 Test: blockdev nvme passthru vendor specific ...passed 00:09:10.447 Test: blockdev nvme admin passthru ...[2024-11-26 17:55:27.305032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:10.447 [2024-11-26 17:55:27.305067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:10.447 passed 00:09:10.447 Test: blockdev copy ...passed 00:09:10.447 Suite: bdevio tests on: Nvme0n1p2 00:09:10.447 Test: blockdev write read block ...passed 00:09:10.447 Test: blockdev write zeroes read block ...passed 00:09:10.447 Test: blockdev write zeroes read no split ...passed 00:09:10.447 Test: blockdev write zeroes read split ...passed 00:09:10.447 Test: blockdev write zeroes read split partial ...passed 00:09:10.447 Test: blockdev reset ...[2024-11-26 17:55:27.325744] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:10.447 [2024-11-26 17:55:27.327718] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:10.447 passed 00:09:10.447 Test: blockdev write read 8 blocks ...passed 00:09:10.447 Test: blockdev write read size > 128k ...passed 00:09:10.447 Test: blockdev write read invalid size ...passed 00:09:10.447 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:10.447 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:10.447 Test: blockdev write read max offset ...passed 00:09:10.447 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:10.447 Test: blockdev writev readv 8 blocks ...passed 00:09:10.447 Test: blockdev writev readv 30 x 1block ...passed 00:09:10.447 Test: blockdev writev readv block ...passed 00:09:10.447 Test: blockdev writev readv size > 128k ...passed 00:09:10.447 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:10.447 Test: blockdev comparev and writev ...passed 00:09:10.447 Test: blockdev nvme passthru rw ...passed 00:09:10.447 Test: blockdev nvme passthru vendor specific ...passed 00:09:10.447 Test: blockdev nvme admin passthru ...passed 00:09:10.447 Test: blockdev copy ...[2024-11-26 17:55:27.334344] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:09:10.447 separate metadata which is not supported yet. 00:09:10.447 passed 00:09:10.447 Suite: bdevio tests on: Nvme0n1p1 00:09:10.447 Test: blockdev write read block ...passed 00:09:10.447 Test: blockdev write zeroes read block ...passed 00:09:10.448 Test: blockdev write zeroes read no split ...passed 00:09:10.448 Test: blockdev write zeroes read split ...passed 00:09:10.448 Test: blockdev write zeroes read split partial ...passed 00:09:10.448 Test: blockdev reset ...[2024-11-26 17:55:27.353195] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:10.448 [2024-11-26 17:55:27.355029] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:10.448 passed 00:09:10.448 Test: blockdev write read 8 blocks ...passed 00:09:10.448 Test: blockdev write read size > 128k ...passed 00:09:10.448 Test: blockdev write read invalid size ...passed 00:09:10.448 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:10.448 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:10.448 Test: blockdev write read max offset ...passed 00:09:10.448 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:10.448 Test: blockdev writev readv 8 blocks ...passed 00:09:10.448 Test: blockdev writev readv 30 x 1block ...passed 00:09:10.448 Test: blockdev writev readv block ...passed 00:09:10.448 Test: blockdev writev readv size > 128k ...passed 00:09:10.448 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:10.448 Test: blockdev comparev and writev ...passed 00:09:10.448 Test: blockdev nvme passthru rw ...passed 00:09:10.448 Test: blockdev nvme passthru vendor specific ...passed 00:09:10.448 Test: blockdev nvme admin passthru ...passed 00:09:10.448 Test: blockdev copy ...[2024-11-26 17:55:27.361385] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:09:10.448 separate metadata which is not supported yet. 
00:09:10.448 passed 00:09:10.448 00:09:10.448 Run Summary: Type Total Ran Passed Failed Inactive 00:09:10.448 suites 7 7 n/a 0 0 00:09:10.448 tests 161 161 161 0 0 00:09:10.448 asserts 1006 1006 1006 0 n/a 00:09:10.448 00:09:10.448 Elapsed time = 0.468 seconds 00:09:10.448 0 00:09:10.707 17:55:27 -- bdev/blockdev.sh@293 -- # killprocess 73823 00:09:10.707 17:55:27 -- common/autotest_common.sh@936 -- # '[' -z 73823 ']' 00:09:10.707 17:55:27 -- common/autotest_common.sh@940 -- # kill -0 73823 00:09:10.707 17:55:27 -- common/autotest_common.sh@941 -- # uname 00:09:10.707 17:55:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:10.707 17:55:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73823 00:09:10.707 17:55:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:10.707 17:55:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:10.707 killing process with pid 73823 00:09:10.707 17:55:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73823' 00:09:10.707 17:55:27 -- common/autotest_common.sh@955 -- # kill 73823 00:09:10.707 17:55:27 -- common/autotest_common.sh@960 -- # wait 73823 00:09:10.966 17:55:27 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:09:10.966 00:09:10.966 real 0m1.491s 00:09:10.966 user 0m3.653s 00:09:10.966 sys 0m0.373s 00:09:10.966 17:55:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:10.966 ************************************ 00:09:10.966 END TEST bdev_bounds 00:09:10.966 ************************************ 00:09:10.966 17:55:27 -- common/autotest_common.sh@10 -- # set +x 00:09:10.966 17:55:27 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:10.966 17:55:27 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:09:10.966 17:55:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:10.966 17:55:27 -- common/autotest_common.sh@10 -- # set +x 00:09:10.966 ************************************ 00:09:10.966 START TEST bdev_nbd 00:09:10.966 ************************************ 00:09:10.966 17:55:27 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:10.966 17:55:27 -- bdev/blockdev.sh@298 -- # uname -s 00:09:10.966 17:55:27 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:09:10.966 17:55:27 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.966 17:55:27 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:10.966 17:55:27 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:10.966 17:55:27 -- bdev/blockdev.sh@302 -- # local bdev_all 00:09:10.966 17:55:27 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:09:10.966 17:55:27 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:09:10.966 17:55:27 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:10.966 17:55:27 -- bdev/blockdev.sh@309 -- # local nbd_all 00:09:10.966 17:55:27 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:09:10.966 17:55:27 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:10.966 17:55:27 -- bdev/blockdev.sh@312 -- # local nbd_list 00:09:10.966 17:55:27 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:10.966 17:55:27 -- bdev/blockdev.sh@313 -- # local bdev_list 00:09:10.966 17:55:27 -- bdev/blockdev.sh@316 -- # nbd_pid=73872 00:09:10.966 17:55:27 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:10.966 17:55:27 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:10.966 17:55:27 -- bdev/blockdev.sh@318 -- # waitforlisten 73872 /var/tmp/spdk-nbd.sock 00:09:10.966 17:55:27 -- common/autotest_common.sh@829 -- # '[' -z 73872 ']' 00:09:10.966 17:55:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:10.966 17:55:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:10.966 17:55:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:10.966 17:55:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.966 17:55:27 -- common/autotest_common.sh@10 -- # set +x 00:09:10.966 [2024-11-26 17:55:27.819967] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:10.966 [2024-11-26 17:55:27.820102] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.225 [2024-11-26 17:55:27.961603] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.225 [2024-11-26 17:55:28.000970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.790 17:55:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:11.790 17:55:28 -- common/autotest_common.sh@862 -- # return 0 00:09:11.790 17:55:28 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:11.790 17:55:28 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.790 17:55:28 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:11.790 17:55:28 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:11.790 17:55:28 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:11.790 17:55:28 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.790 17:55:28 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:11.790 17:55:28 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:11.790 17:55:28 -- bdev/nbd_common.sh@24 -- # local i 00:09:11.790 17:55:28 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:11.790 17:55:28 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:11.790 17:55:28 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:11.790 17:55:28 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:09:12.049 17:55:28 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:12.049 17:55:28 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:12.049 17:55:28 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:12.049 17:55:28 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:12.049 17:55:28 -- common/autotest_common.sh@867 -- # local i 00:09:12.049 17:55:28 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:12.049 17:55:28 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:12.049 17:55:28 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:12.049 17:55:28 -- common/autotest_common.sh@871 -- # break 00:09:12.049 17:55:28 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:12.049 17:55:28 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:12.050 17:55:28 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:12.050 1+0 records in 00:09:12.050 1+0 records out 00:09:12.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608073 s, 6.7 MB/s 00:09:12.050 17:55:28 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.050 17:55:28 -- common/autotest_common.sh@884 -- # size=4096 00:09:12.050 17:55:28 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.050 17:55:28 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:12.050 17:55:28 -- common/autotest_common.sh@887 -- # return 0 00:09:12.050 17:55:28 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:12.050 17:55:28 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:12.050 17:55:28 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:09:12.308 17:55:29 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:12.308 17:55:29 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:12.308 17:55:29 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:12.308 17:55:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:12.308 17:55:29 -- common/autotest_common.sh@867 -- # local i 00:09:12.308 17:55:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:12.308 17:55:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:12.308 17:55:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:12.308 17:55:29 -- common/autotest_common.sh@871 -- # break 00:09:12.308 17:55:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:12.308 17:55:29 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:12.308 17:55:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:12.308 1+0 records in 00:09:12.308 1+0 records out 00:09:12.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383597 s, 10.7 MB/s 00:09:12.308 17:55:29 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.308 17:55:29 -- common/autotest_common.sh@884 -- # size=4096 00:09:12.308 17:55:29 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.308 17:55:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:12.308 17:55:29 -- common/autotest_common.sh@887 -- # return 0 00:09:12.308 17:55:29 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:12.308 17:55:29 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:12.308 17:55:29 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:12.565 17:55:29 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:12.565 17:55:29 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:12.565 17:55:29 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:12.565 17:55:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:09:12.565 17:55:29 -- common/autotest_common.sh@867 -- # local i 00:09:12.565 17:55:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:12.565 17:55:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:12.565 17:55:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:09:12.565 17:55:29 -- common/autotest_common.sh@871 -- # break 00:09:12.565 17:55:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:12.565 17:55:29 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:12.565 17:55:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:12.565 1+0 records in 00:09:12.565 1+0 records out 00:09:12.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000719523 s, 5.7 MB/s 00:09:12.565 17:55:29 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.565 17:55:29 -- common/autotest_common.sh@884 -- # size=4096 00:09:12.565 17:55:29 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.565 17:55:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:12.565 17:55:29 -- common/autotest_common.sh@887 -- # return 0 00:09:12.565 17:55:29 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:12.565 17:55:29 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:12.565 17:55:29 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:12.823 17:55:29 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:12.823 17:55:29 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:12.823 17:55:29 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:12.823 17:55:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:09:12.823 17:55:29 -- common/autotest_common.sh@867 -- # local i 00:09:12.823 17:55:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:12.823 17:55:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:12.823 17:55:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:09:12.823 17:55:29 -- common/autotest_common.sh@871 -- # break 00:09:12.823 17:55:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:12.823 17:55:29 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:12.823 17:55:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:12.823 1+0 records in 00:09:12.823 1+0 records out 00:09:12.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000694961 s, 5.9 MB/s 00:09:12.823 17:55:29 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.823 17:55:29 -- common/autotest_common.sh@884 -- # size=4096 00:09:12.823 17:55:29 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.823 17:55:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:12.823 17:55:29 -- common/autotest_common.sh@887 -- # return 0 00:09:12.823 17:55:29 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:12.823 17:55:29 -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:12.823 17:55:29 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:13.082 17:55:29 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:13.082 17:55:29 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:13.082 17:55:29 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:13.082 17:55:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:09:13.082 17:55:29 -- common/autotest_common.sh@867 -- # local i 00:09:13.082 17:55:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:13.082 17:55:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:13.082 17:55:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:09:13.082 17:55:29 -- common/autotest_common.sh@871 -- # break 00:09:13.082 17:55:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:13.082 17:55:29 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:13.082 17:55:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:13.082 1+0 records in 00:09:13.082 1+0 records out 00:09:13.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000617088 s, 6.6 MB/s 00:09:13.082 17:55:29 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.082 17:55:29 -- common/autotest_common.sh@884 -- # size=4096 00:09:13.082 17:55:29 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.082 17:55:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:13.082 17:55:29 -- common/autotest_common.sh@887 -- # return 0 00:09:13.082 17:55:29 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:13.082 17:55:29 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:13.082 17:55:29 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:13.342 17:55:30 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:13.342 17:55:30 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:13.342 17:55:30 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:13.342 17:55:30 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:09:13.342 17:55:30 -- common/autotest_common.sh@867 -- # local i 00:09:13.342 17:55:30 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:13.342 17:55:30 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:13.342 17:55:30 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:09:13.342 17:55:30 -- common/autotest_common.sh@871 -- # break 00:09:13.342 17:55:30 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:13.342 17:55:30 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:13.342 17:55:30 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:13.342 1+0 records in 00:09:13.342 1+0 records out 00:09:13.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00416212 s, 984 kB/s 00:09:13.342 17:55:30 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.342 17:55:30 -- common/autotest_common.sh@884 -- # size=4096 00:09:13.342 17:55:30 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.342 17:55:30 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:13.342 17:55:30 -- common/autotest_common.sh@887 -- # return 0 
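[editor's note] Every nbd_start_disk above is followed by the same readiness check: waitfornbd polls /proc/partitions until the new nbd device appears, then performs one 4 KiB O_DIRECT read through it to prove the SPDK-backed block device actually serves I/O — those are the dd transcripts and throughput figures in the log. A condensed sketch of that pattern, built from the logged commands (the retry cadence/sleep is an assumption; the real helper also retries the dd):

    waitfornbd() {
        local nbd_name=$1 i size
        local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        # Poll until the kernel has registered the device (up to ~20 tries).
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One direct-I/O read of a single 4096-byte block, as in the dd lines above.
        dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [[ $size != 0 ]]   # a non-empty read means the device is usable
    }
    waitfornbd nbd0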
00:09:13.342 17:55:30 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:13.342 17:55:30 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:13.342 17:55:30 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:13.602 17:55:30 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:13.602 17:55:30 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:13.602 17:55:30 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:13.602 17:55:30 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:09:13.602 17:55:30 -- common/autotest_common.sh@867 -- # local i 00:09:13.602 17:55:30 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:13.602 17:55:30 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:13.602 17:55:30 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:09:13.602 17:55:30 -- common/autotest_common.sh@871 -- # break 00:09:13.602 17:55:30 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:13.602 17:55:30 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:13.602 17:55:30 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:13.602 1+0 records in 00:09:13.602 1+0 records out 00:09:13.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000879583 s, 4.7 MB/s 00:09:13.602 17:55:30 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.602 17:55:30 -- common/autotest_common.sh@884 -- # size=4096 00:09:13.602 17:55:30 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.602 17:55:30 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:13.602 17:55:30 -- common/autotest_common.sh@887 -- # return 0 00:09:13.602 17:55:30 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:13.602 17:55:30 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:13.602 17:55:30 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:13.602 17:55:30 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:13.602 { 00:09:13.602 "nbd_device": "/dev/nbd0", 00:09:13.602 "bdev_name": "Nvme0n1p1" 00:09:13.602 }, 00:09:13.602 { 00:09:13.602 "nbd_device": "/dev/nbd1", 00:09:13.602 "bdev_name": "Nvme0n1p2" 00:09:13.602 }, 00:09:13.602 { 00:09:13.602 "nbd_device": "/dev/nbd2", 00:09:13.602 "bdev_name": "Nvme1n1" 00:09:13.602 }, 00:09:13.602 { 00:09:13.602 "nbd_device": "/dev/nbd3", 00:09:13.602 "bdev_name": "Nvme2n1" 00:09:13.602 }, 00:09:13.602 { 00:09:13.602 "nbd_device": "/dev/nbd4", 00:09:13.602 "bdev_name": "Nvme2n2" 00:09:13.602 }, 00:09:13.602 { 00:09:13.602 "nbd_device": "/dev/nbd5", 00:09:13.602 "bdev_name": "Nvme2n3" 00:09:13.602 }, 00:09:13.602 { 00:09:13.602 "nbd_device": "/dev/nbd6", 00:09:13.602 "bdev_name": "Nvme3n1" 00:09:13.602 } 00:09:13.602 ]' 00:09:13.602 17:55:30 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:13.602 17:55:30 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:13.602 17:55:30 -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:13.602 { 00:09:13.602 "nbd_device": "/dev/nbd0", 00:09:13.602 "bdev_name": "Nvme0n1p1" 00:09:13.602 }, 00:09:13.602 { 00:09:13.602 "nbd_device": "/dev/nbd1", 00:09:13.602 "bdev_name": "Nvme0n1p2" 00:09:13.602 }, 00:09:13.602 { 00:09:13.602 "nbd_device": "/dev/nbd2", 00:09:13.602 "bdev_name": "Nvme1n1" 00:09:13.602 }, 00:09:13.602 { 00:09:13.602 "nbd_device": 
"/dev/nbd3", 00:09:13.602 "bdev_name": "Nvme2n1" 00:09:13.602 }, 00:09:13.602 { 00:09:13.602 "nbd_device": "/dev/nbd4", 00:09:13.602 "bdev_name": "Nvme2n2" 00:09:13.602 }, 00:09:13.602 { 00:09:13.602 "nbd_device": "/dev/nbd5", 00:09:13.602 "bdev_name": "Nvme2n3" 00:09:13.602 }, 00:09:13.602 { 00:09:13.602 "nbd_device": "/dev/nbd6", 00:09:13.602 "bdev_name": "Nvme3n1" 00:09:13.602 } 00:09:13.602 ]' 00:09:13.861 17:55:30 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:13.861 17:55:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.861 17:55:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:13.861 17:55:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:13.861 17:55:30 -- bdev/nbd_common.sh@51 -- # local i 00:09:13.861 17:55:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:13.861 17:55:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:13.861 17:55:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:13.861 17:55:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:13.861 17:55:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:13.861 17:55:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:13.861 17:55:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:13.861 17:55:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:13.861 17:55:30 -- bdev/nbd_common.sh@41 -- # break 00:09:13.861 17:55:30 -- bdev/nbd_common.sh@45 -- # return 0 00:09:13.861 17:55:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:13.861 17:55:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:14.120 17:55:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:14.120 17:55:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:14.120 17:55:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:14.120 17:55:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:14.120 17:55:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:14.120 17:55:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:14.120 17:55:30 -- bdev/nbd_common.sh@41 -- # break 00:09:14.120 17:55:30 -- bdev/nbd_common.sh@45 -- # return 0 00:09:14.120 17:55:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:14.120 17:55:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:14.380 17:55:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:14.380 17:55:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:14.380 17:55:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:14.380 17:55:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:14.380 17:55:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:14.380 17:55:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:14.380 17:55:31 -- bdev/nbd_common.sh@41 -- # break 00:09:14.380 17:55:31 -- bdev/nbd_common.sh@45 -- # return 0 00:09:14.380 17:55:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:14.380 17:55:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:14.639 17:55:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 
00:09:14.639 17:55:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:14.639 17:55:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:14.639 17:55:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:14.639 17:55:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:14.639 17:55:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:14.639 17:55:31 -- bdev/nbd_common.sh@41 -- # break 00:09:14.639 17:55:31 -- bdev/nbd_common.sh@45 -- # return 0 00:09:14.639 17:55:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:14.639 17:55:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:14.898 17:55:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:14.898 17:55:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:14.898 17:55:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:14.898 17:55:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:14.898 17:55:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:14.898 17:55:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:14.898 17:55:31 -- bdev/nbd_common.sh@41 -- # break 00:09:14.898 17:55:31 -- bdev/nbd_common.sh@45 -- # return 0 00:09:14.898 17:55:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:14.898 17:55:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:15.157 17:55:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:15.157 17:55:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:15.157 17:55:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:15.157 17:55:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.157 17:55:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.157 17:55:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:15.157 17:55:31 -- bdev/nbd_common.sh@41 -- # break 00:09:15.157 17:55:31 -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.157 17:55:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.157 17:55:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:15.157 17:55:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:15.157 17:55:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:15.157 17:55:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:09:15.157 17:55:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.157 17:55:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.157 17:55:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:15.157 17:55:32 -- bdev/nbd_common.sh@41 -- # break 00:09:15.157 17:55:32 -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.157 17:55:32 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:15.157 17:55:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:15.442 17:55:32 -- 
bdev/nbd_common.sh@65 -- # true 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@65 -- # count=0 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@122 -- # count=0 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@127 -- # return 0 00:09:15.442 17:55:32 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@12 -- # local i 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:15.442 17:55:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:09:15.701 /dev/nbd0 00:09:15.701 17:55:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:15.701 17:55:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:15.701 17:55:32 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:15.701 17:55:32 -- common/autotest_common.sh@867 -- # local i 00:09:15.701 17:55:32 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:15.701 17:55:32 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:15.701 17:55:32 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:15.701 17:55:32 -- common/autotest_common.sh@871 -- # break 00:09:15.701 17:55:32 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:15.701 17:55:32 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:15.701 17:55:32 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:15.701 1+0 records in 00:09:15.701 1+0 records out 00:09:15.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000798643 s, 5.1 MB/s 00:09:15.701 17:55:32 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:15.701 17:55:32 -- common/autotest_common.sh@884 -- # size=4096 00:09:15.701 17:55:32 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:15.701 17:55:32 
-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:15.701 17:55:32 -- common/autotest_common.sh@887 -- # return 0 00:09:15.701 17:55:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:15.701 17:55:32 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:15.701 17:55:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:09:15.960 /dev/nbd1 00:09:15.960 17:55:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:15.960 17:55:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:15.960 17:55:32 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:15.960 17:55:32 -- common/autotest_common.sh@867 -- # local i 00:09:15.960 17:55:32 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:15.960 17:55:32 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:15.960 17:55:32 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:15.960 17:55:32 -- common/autotest_common.sh@871 -- # break 00:09:15.960 17:55:32 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:15.960 17:55:32 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:15.960 17:55:32 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:15.960 1+0 records in 00:09:15.960 1+0 records out 00:09:15.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000664884 s, 6.2 MB/s 00:09:15.960 17:55:32 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:15.960 17:55:32 -- common/autotest_common.sh@884 -- # size=4096 00:09:15.960 17:55:32 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:15.960 17:55:32 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:15.960 17:55:32 -- common/autotest_common.sh@887 -- # return 0 00:09:15.960 17:55:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:15.960 17:55:32 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:15.960 17:55:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:09:16.219 /dev/nbd10 00:09:16.219 17:55:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:16.219 17:55:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:16.219 17:55:33 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:09:16.219 17:55:33 -- common/autotest_common.sh@867 -- # local i 00:09:16.219 17:55:33 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:16.219 17:55:33 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:16.219 17:55:33 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:09:16.219 17:55:33 -- common/autotest_common.sh@871 -- # break 00:09:16.219 17:55:33 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:16.219 17:55:33 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:16.219 17:55:33 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:16.219 1+0 records in 00:09:16.219 1+0 records out 00:09:16.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054488 s, 7.5 MB/s 00:09:16.219 17:55:33 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:16.219 17:55:33 -- common/autotest_common.sh@884 -- # size=4096 00:09:16.219 17:55:33 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:16.219 
17:55:33 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:16.219 17:55:33 -- common/autotest_common.sh@887 -- # return 0 00:09:16.219 17:55:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:16.219 17:55:33 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:16.219 17:55:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:16.479 /dev/nbd11 00:09:16.479 17:55:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:16.479 17:55:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:16.479 17:55:33 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:09:16.479 17:55:33 -- common/autotest_common.sh@867 -- # local i 00:09:16.479 17:55:33 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:16.479 17:55:33 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:16.479 17:55:33 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:09:16.479 17:55:33 -- common/autotest_common.sh@871 -- # break 00:09:16.479 17:55:33 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:16.479 17:55:33 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:16.479 17:55:33 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:16.479 1+0 records in 00:09:16.479 1+0 records out 00:09:16.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485012 s, 8.4 MB/s 00:09:16.479 17:55:33 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:16.479 17:55:33 -- common/autotest_common.sh@884 -- # size=4096 00:09:16.479 17:55:33 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:16.479 17:55:33 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:16.479 17:55:33 -- common/autotest_common.sh@887 -- # return 0 00:09:16.479 17:55:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:16.479 17:55:33 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:16.479 17:55:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:16.738 /dev/nbd12 00:09:16.738 17:55:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:16.738 17:55:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:16.738 17:55:33 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:09:16.738 17:55:33 -- common/autotest_common.sh@867 -- # local i 00:09:16.738 17:55:33 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:16.738 17:55:33 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:16.738 17:55:33 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:09:16.738 17:55:33 -- common/autotest_common.sh@871 -- # break 00:09:16.738 17:55:33 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:16.738 17:55:33 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:16.738 17:55:33 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:16.738 1+0 records in 00:09:16.738 1+0 records out 00:09:16.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000731964 s, 5.6 MB/s 00:09:16.738 17:55:33 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:16.738 17:55:33 -- common/autotest_common.sh@884 -- # size=4096 00:09:16.738 17:55:33 -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:16.738 17:55:33 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:16.738 17:55:33 -- common/autotest_common.sh@887 -- # return 0 00:09:16.738 17:55:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:16.738 17:55:33 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:16.738 17:55:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:16.998 /dev/nbd13 00:09:16.998 17:55:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:16.998 17:55:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:16.998 17:55:33 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:09:16.998 17:55:33 -- common/autotest_common.sh@867 -- # local i 00:09:16.998 17:55:33 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:16.998 17:55:33 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:16.998 17:55:33 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:09:16.998 17:55:33 -- common/autotest_common.sh@871 -- # break 00:09:16.998 17:55:33 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:16.998 17:55:33 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:16.998 17:55:33 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:16.998 1+0 records in 00:09:16.998 1+0 records out 00:09:16.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000697206 s, 5.9 MB/s 00:09:16.998 17:55:33 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:16.998 17:55:33 -- common/autotest_common.sh@884 -- # size=4096 00:09:16.998 17:55:33 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:16.998 17:55:33 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:16.998 17:55:33 -- common/autotest_common.sh@887 -- # return 0 00:09:16.998 17:55:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:16.998 17:55:33 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:16.998 17:55:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:17.257 /dev/nbd14 00:09:17.257 17:55:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:17.257 17:55:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:17.257 17:55:34 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:09:17.257 17:55:34 -- common/autotest_common.sh@867 -- # local i 00:09:17.257 17:55:34 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:17.257 17:55:34 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:17.257 17:55:34 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:09:17.257 17:55:34 -- common/autotest_common.sh@871 -- # break 00:09:17.257 17:55:34 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:17.257 17:55:34 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:17.257 17:55:34 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:17.257 1+0 records in 00:09:17.257 1+0 records out 00:09:17.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000903656 s, 4.5 MB/s 00:09:17.257 17:55:34 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.257 17:55:34 -- common/autotest_common.sh@884 -- # size=4096 00:09:17.257 17:55:34 -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.257 17:55:34 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:17.257 17:55:34 -- common/autotest_common.sh@887 -- # return 0 00:09:17.257 17:55:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:17.257 17:55:34 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:17.257 17:55:34 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:17.257 17:55:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:17.257 17:55:34 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:17.516 17:55:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:17.516 { 00:09:17.516 "nbd_device": "/dev/nbd0", 00:09:17.516 "bdev_name": "Nvme0n1p1" 00:09:17.516 }, 00:09:17.516 { 00:09:17.516 "nbd_device": "/dev/nbd1", 00:09:17.516 "bdev_name": "Nvme0n1p2" 00:09:17.516 }, 00:09:17.516 { 00:09:17.516 "nbd_device": "/dev/nbd10", 00:09:17.516 "bdev_name": "Nvme1n1" 00:09:17.516 }, 00:09:17.516 { 00:09:17.516 "nbd_device": "/dev/nbd11", 00:09:17.516 "bdev_name": "Nvme2n1" 00:09:17.516 }, 00:09:17.516 { 00:09:17.516 "nbd_device": "/dev/nbd12", 00:09:17.516 "bdev_name": "Nvme2n2" 00:09:17.516 }, 00:09:17.516 { 00:09:17.516 "nbd_device": "/dev/nbd13", 00:09:17.516 "bdev_name": "Nvme2n3" 00:09:17.516 }, 00:09:17.516 { 00:09:17.516 "nbd_device": "/dev/nbd14", 00:09:17.516 "bdev_name": "Nvme3n1" 00:09:17.516 } 00:09:17.516 ]' 00:09:17.516 17:55:34 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:17.516 { 00:09:17.516 "nbd_device": "/dev/nbd0", 00:09:17.517 "bdev_name": "Nvme0n1p1" 00:09:17.517 }, 00:09:17.517 { 00:09:17.517 "nbd_device": "/dev/nbd1", 00:09:17.517 "bdev_name": "Nvme0n1p2" 00:09:17.517 }, 00:09:17.517 { 00:09:17.517 "nbd_device": "/dev/nbd10", 00:09:17.517 "bdev_name": "Nvme1n1" 00:09:17.517 }, 00:09:17.517 { 00:09:17.517 "nbd_device": "/dev/nbd11", 00:09:17.517 "bdev_name": "Nvme2n1" 00:09:17.517 }, 00:09:17.517 { 00:09:17.517 "nbd_device": "/dev/nbd12", 00:09:17.517 "bdev_name": "Nvme2n2" 00:09:17.517 }, 00:09:17.517 { 00:09:17.517 "nbd_device": "/dev/nbd13", 00:09:17.517 "bdev_name": "Nvme2n3" 00:09:17.517 }, 00:09:17.517 { 00:09:17.517 "nbd_device": "/dev/nbd14", 00:09:17.517 "bdev_name": "Nvme3n1" 00:09:17.517 } 00:09:17.517 ]' 00:09:17.517 17:55:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:17.517 17:55:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:17.517 /dev/nbd1 00:09:17.517 /dev/nbd10 00:09:17.517 /dev/nbd11 00:09:17.517 /dev/nbd12 00:09:17.517 /dev/nbd13 00:09:17.517 /dev/nbd14' 00:09:17.517 17:55:34 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:17.517 /dev/nbd1 00:09:17.517 /dev/nbd10 00:09:17.517 /dev/nbd11 00:09:17.517 /dev/nbd12 00:09:17.517 /dev/nbd13 00:09:17.517 /dev/nbd14' 00:09:17.517 17:55:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:17.517 17:55:34 -- bdev/nbd_common.sh@65 -- # count=7 00:09:17.517 17:55:34 -- bdev/nbd_common.sh@66 -- # echo 7 00:09:17.517 17:55:34 -- bdev/nbd_common.sh@95 -- # count=7 00:09:17.517 17:55:34 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:17.517 17:55:34 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:17.517 17:55:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:17.517 17:55:34 -- bdev/nbd_common.sh@70 -- # local 
nbd_list 00:09:17.517 17:55:34 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:17.517 17:55:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:17.517 17:55:34 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:17.517 17:55:34 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:17.517 256+0 records in 00:09:17.517 256+0 records out 00:09:17.517 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117369 s, 89.3 MB/s 00:09:17.517 17:55:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:17.517 17:55:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:17.776 256+0 records in 00:09:17.776 256+0 records out 00:09:17.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140714 s, 7.5 MB/s 00:09:17.776 17:55:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:17.776 17:55:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:17.776 256+0 records in 00:09:17.776 256+0 records out 00:09:17.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13912 s, 7.5 MB/s 00:09:17.776 17:55:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:17.776 17:55:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:18.036 256+0 records in 00:09:18.036 256+0 records out 00:09:18.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138254 s, 7.6 MB/s 00:09:18.036 17:55:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:18.036 17:55:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:18.036 256+0 records in 00:09:18.036 256+0 records out 00:09:18.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137741 s, 7.6 MB/s 00:09:18.036 17:55:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:18.036 17:55:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:18.295 256+0 records in 00:09:18.295 256+0 records out 00:09:18.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13495 s, 7.8 MB/s 00:09:18.295 17:55:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:18.295 17:55:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:18.295 256+0 records in 00:09:18.295 256+0 records out 00:09:18.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137367 s, 7.6 MB/s 00:09:18.295 17:55:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:18.295 17:55:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:18.555 256+0 records in 00:09:18.555 256+0 records out 00:09:18.555 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139544 s, 7.5 MB/s 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@71 -- # local 
operation=verify 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@51 -- # local i 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:18.555 17:55:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:18.814 17:55:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:18.814 17:55:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:18.814 17:55:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:18.814 17:55:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:18.814 17:55:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:18.814 17:55:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:18.814 17:55:35 -- bdev/nbd_common.sh@41 -- # break 00:09:18.814 17:55:35 -- bdev/nbd_common.sh@45 -- # return 0 00:09:18.814 17:55:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:18.814 17:55:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:19.074 17:55:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:19.074 17:55:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:19.074 17:55:35 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:09:19.074 17:55:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:19.074 17:55:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:19.074 17:55:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:19.074 17:55:35 -- bdev/nbd_common.sh@41 -- # break 00:09:19.074 17:55:35 -- bdev/nbd_common.sh@45 -- # return 0 00:09:19.074 17:55:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:19.074 17:55:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:19.333 17:55:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:19.333 17:55:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:19.333 17:55:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:19.333 17:55:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:19.333 17:55:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:19.333 17:55:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:19.333 17:55:36 -- bdev/nbd_common.sh@41 -- # break 00:09:19.333 17:55:36 -- bdev/nbd_common.sh@45 -- # return 0 00:09:19.333 17:55:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:19.333 17:55:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:19.333 17:55:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:19.333 17:55:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:19.333 17:55:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:19.333 17:55:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:19.333 17:55:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:19.333 17:55:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:19.592 17:55:36 -- bdev/nbd_common.sh@41 -- # break 00:09:19.592 17:55:36 -- bdev/nbd_common.sh@45 -- # return 0 00:09:19.592 17:55:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:19.592 17:55:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:19.592 17:55:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:19.592 17:55:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:19.592 17:55:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:19.592 17:55:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:19.592 17:55:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:19.592 17:55:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:19.592 17:55:36 -- bdev/nbd_common.sh@41 -- # break 00:09:19.592 17:55:36 -- bdev/nbd_common.sh@45 -- # return 0 00:09:19.592 17:55:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:19.592 17:55:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:19.851 17:55:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:19.851 17:55:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:19.851 17:55:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:19.851 17:55:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:19.851 17:55:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:19.851 17:55:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:19.851 17:55:36 -- bdev/nbd_common.sh@41 -- # break 00:09:19.851 17:55:36 -- bdev/nbd_common.sh@45 -- # return 0 00:09:19.851 17:55:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
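The dd writes and cmp reads traced above implement a simple integrity round trip: one shared 1 MiB random pattern goes to every export with O_DIRECT, then the first 1 MiB of each device is compared byte-for-byte before these stop entries tear the devices down. Condensed into a standalone sketch using the device list and scratch path from this run:

  tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
  dd if=/dev/urandom of="$tmp" bs=4096 count=256           # one 1 MiB pattern
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
  done
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp" "$dev"    # exits non-zero on the first differing byte
  done
  rm "$tmp"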
00:09:19.851 17:55:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:20.110 17:55:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:20.110 17:55:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:20.110 17:55:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:20.110 17:55:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:20.110 17:55:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:20.110 17:55:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:20.110 17:55:36 -- bdev/nbd_common.sh@41 -- # break 00:09:20.110 17:55:36 -- bdev/nbd_common.sh@45 -- # return 0 00:09:20.110 17:55:36 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:20.110 17:55:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.110 17:55:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:20.369 17:55:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:20.369 17:55:37 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:20.369 17:55:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:20.369 17:55:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:20.369 17:55:37 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:20.369 17:55:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:20.369 17:55:37 -- bdev/nbd_common.sh@65 -- # true 00:09:20.369 17:55:37 -- bdev/nbd_common.sh@65 -- # count=0 00:09:20.369 17:55:37 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:20.369 17:55:37 -- bdev/nbd_common.sh@104 -- # count=0 00:09:20.369 17:55:37 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:20.369 17:55:37 -- bdev/nbd_common.sh@109 -- # return 0 00:09:20.369 17:55:37 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:20.369 17:55:37 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.369 17:55:37 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:20.369 17:55:37 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:20.369 17:55:37 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:20.369 17:55:37 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:20.628 malloc_lvol_verify 00:09:20.628 17:55:37 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:20.628 e35450d8-e0c4-45f3-bcd5-a526639a8e3b 00:09:20.628 17:55:37 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:20.887 992ebc72-d4c7-4f1c-be61-892805662f27 00:09:20.887 17:55:37 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:21.144 /dev/nbd0 00:09:21.144 17:55:37 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:21.144 mke2fs 1.47.0 (5-Feb-2023) 00:09:21.144 Discarding device blocks: 0/4096 done 00:09:21.144 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:21.144 00:09:21.144 Allocating group tables: 0/1 done 00:09:21.144 Writing inode tables: 0/1 done 00:09:21.144 Creating journal (1024 blocks): done 
00:09:21.144 Writing superblocks and filesystem accounting information: 0/1 done 00:09:21.144 00:09:21.144 17:55:37 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:21.144 17:55:37 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:21.144 17:55:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:21.144 17:55:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:21.144 17:55:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:21.144 17:55:37 -- bdev/nbd_common.sh@51 -- # local i 00:09:21.144 17:55:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:21.144 17:55:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:21.403 17:55:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:21.403 17:55:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:21.403 17:55:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:21.403 17:55:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:21.403 17:55:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:21.403 17:55:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:21.403 17:55:38 -- bdev/nbd_common.sh@41 -- # break 00:09:21.403 17:55:38 -- bdev/nbd_common.sh@45 -- # return 0 00:09:21.403 17:55:38 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:21.403 17:55:38 -- bdev/nbd_common.sh@147 -- # return 0 00:09:21.403 17:55:38 -- bdev/blockdev.sh@324 -- # killprocess 73872 00:09:21.403 17:55:38 -- common/autotest_common.sh@936 -- # '[' -z 73872 ']' 00:09:21.403 17:55:38 -- common/autotest_common.sh@940 -- # kill -0 73872 00:09:21.403 17:55:38 -- common/autotest_common.sh@941 -- # uname 00:09:21.403 17:55:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:21.403 17:55:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73872 00:09:21.403 17:55:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:21.403 killing process with pid 73872 00:09:21.403 17:55:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:21.403 17:55:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73872' 00:09:21.403 17:55:38 -- common/autotest_common.sh@955 -- # kill 73872 00:09:21.403 17:55:38 -- common/autotest_common.sh@960 -- # wait 73872 00:09:21.662 17:55:38 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:09:21.662 00:09:21.662 real 0m10.769s 00:09:21.662 user 0m14.115s 00:09:21.662 sys 0m5.025s 00:09:21.662 17:55:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:21.662 17:55:38 -- common/autotest_common.sh@10 -- # set +x 00:09:21.662 ************************************ 00:09:21.662 END TEST bdev_nbd 00:09:21.662 ************************************ 00:09:21.662 17:55:38 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:09:21.662 17:55:38 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:09:21.662 17:55:38 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:09:21.662 skipping fio tests on NVMe due to multi-ns failures. 00:09:21.662 17:55:38 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
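The lvol leg that closes TEST bdev_nbd above stacks a 16 MiB malloc bdev, an lvstore, and a 4 MiB lvol, exports the lvol over NBD, and treats a clean mkfs.ext4 plus a clean stop as the pass condition. The same sequence as a standalone sketch; the RPC names and arguments are taken verbatim from the trace, only the rpc wrapper function is added for brevity:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
  rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks
  rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
  rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside lvs
  rpc nbd_start_disk lvs/lvol /dev/nbd0
  mkfs.ext4 /dev/nbd0; mkfs_ret=$?
  rpc nbd_stop_disk /dev/nbd0
  [ "$mkfs_ret" -eq 0 ]                                 # the nbd_common.sh@143 check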
00:09:21.662 17:55:38 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:21.662 17:55:38 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:21.662 17:55:38 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:09:21.662 17:55:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:21.662 17:55:38 -- common/autotest_common.sh@10 -- # set +x 00:09:21.662 ************************************ 00:09:21.662 START TEST bdev_verify 00:09:21.662 ************************************ 00:09:21.662 17:55:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:21.922 [2024-11-26 17:55:38.631275] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:21.922 [2024-11-26 17:55:38.631407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74277 ] 00:09:21.922 [2024-11-26 17:55:38.783193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:21.922 [2024-11-26 17:55:38.824032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.922 [2024-11-26 17:55:38.824151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.489 Running I/O for 5 seconds... 00:09:27.762 00:09:27.762 Latency(us) 00:09:27.762 [2024-11-26T17:55:44.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.762 [2024-11-26T17:55:44.688Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:27.762 Verification LBA range: start 0x0 length 0x5e800 00:09:27.762 Nvme0n1p1 : 5.04 2656.21 10.38 0.00 0.00 48007.43 10633.15 64009.46 00:09:27.762 [2024-11-26T17:55:44.688Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:27.762 Verification LBA range: start 0x5e800 length 0x5e800 00:09:27.762 Nvme0n1p1 : 5.04 2655.85 10.37 0.00 0.00 48015.10 10738.43 63167.23 00:09:27.762 [2024-11-26T17:55:44.688Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:27.762 Verification LBA range: start 0x0 length 0x5e7ff 00:09:27.762 Nvme0n1p2 : 5.05 2660.76 10.39 0.00 0.00 47925.20 6790.48 61061.65 00:09:27.762 [2024-11-26T17:55:44.688Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:27.762 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:09:27.762 Nvme0n1p2 : 5.05 2660.95 10.39 0.00 0.00 47926.56 5474.49 59798.31 00:09:27.762 [2024-11-26T17:55:44.688Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:27.762 Verification LBA range: start 0x0 length 0xa0000 00:09:27.762 Nvme1n1 : 5.06 2659.63 10.39 0.00 0.00 47859.11 7895.90 52849.91 00:09:27.762 [2024-11-26T17:55:44.688Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:27.762 Verification LBA range: start 0xa0000 length 0xa0000 00:09:27.762 Nvme1n1 : 5.05 2659.80 10.39 0.00 0.00 47815.74 6948.40 46743.75 00:09:27.762 [2024-11-26T17:55:44.688Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:27.762 Verification LBA range: start 0x0 length 0x80000 00:09:27.763 Nvme2n1 : 5.06 
2664.81 10.41 0.00 0.00 47705.72 3184.68 46112.08 00:09:27.763 [2024-11-26T17:55:44.689Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:27.763 Verification LBA range: start 0x80000 length 0x80000 00:09:27.763 Nvme2n1 : 5.06 2664.99 10.41 0.00 0.00 47703.24 3447.88 45480.40 00:09:27.763 [2024-11-26T17:55:44.689Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:27.763 Verification LBA range: start 0x0 length 0x80000 00:09:27.763 Nvme2n2 : 5.06 2663.68 10.41 0.00 0.00 47658.32 4369.07 46112.08 00:09:27.763 [2024-11-26T17:55:44.689Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:27.763 Verification LBA range: start 0x80000 length 0x80000 00:09:27.763 Nvme2n2 : 5.06 2663.84 10.41 0.00 0.00 47657.61 4737.54 45269.85 00:09:27.763 [2024-11-26T17:55:44.689Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:27.763 Verification LBA range: start 0x0 length 0x80000 00:09:27.763 Nvme2n3 : 5.07 2662.57 10.40 0.00 0.00 47617.70 5474.49 45480.40 00:09:27.763 [2024-11-26T17:55:44.689Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:27.763 Verification LBA range: start 0x80000 length 0x80000 00:09:27.763 Nvme2n3 : 5.07 2662.73 10.40 0.00 0.00 47629.07 5869.29 45480.40 00:09:27.763 [2024-11-26T17:55:44.689Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:27.763 Verification LBA range: start 0x0 length 0x20000 00:09:27.763 Nvme3n1 : 5.07 2667.66 10.42 0.00 0.00 47510.27 1006.73 45901.52 00:09:27.763 [2024-11-26T17:55:44.689Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:27.763 Verification LBA range: start 0x20000 length 0x20000 00:09:27.763 Nvme3n1 : 5.07 2661.59 10.40 0.00 0.00 47601.29 7053.67 44427.62 00:09:27.763 [2024-11-26T17:55:44.689Z] =================================================================================================================== 00:09:27.763 [2024-11-26T17:55:44.689Z] Total : 37265.09 145.57 0.00 0.00 47759.06 1006.73 64009.46 00:09:28.702 00:09:28.702 real 0m6.835s 00:09:28.702 user 0m12.836s 00:09:28.702 sys 0m0.275s 00:09:28.702 17:55:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:28.702 17:55:45 -- common/autotest_common.sh@10 -- # set +x 00:09:28.702 ************************************ 00:09:28.702 END TEST bdev_verify 00:09:28.702 ************************************ 00:09:28.702 17:55:45 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:28.702 17:55:45 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:09:28.702 17:55:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:28.702 17:55:45 -- common/autotest_common.sh@10 -- # set +x 00:09:28.702 ************************************ 00:09:28.702 START TEST bdev_verify_big_io 00:09:28.702 ************************************ 00:09:28.702 17:55:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:28.702 [2024-11-26 17:55:45.531586] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:09:28.702 [2024-11-26 17:55:45.531711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74375 ] 00:09:28.961 [2024-11-26 17:55:45.665595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:28.961 [2024-11-26 17:55:45.705920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.961 [2024-11-26 17:55:45.705997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.529 Running I/O for 5 seconds... 00:09:34.803 00:09:34.803 Latency(us) 00:09:34.803 [2024-11-26T17:55:51.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:34.803 [2024-11-26T17:55:51.729Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:34.803 Verification LBA range: start 0x0 length 0x5e80 00:09:34.803 Nvme0n1p1 : 5.31 295.22 18.45 0.00 0.00 427135.77 38110.89 609774.32 00:09:34.803 [2024-11-26T17:55:51.729Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:34.803 Verification LBA range: start 0x5e80 length 0x5e80 00:09:34.803 Nvme0n1p1 : 5.31 294.88 18.43 0.00 0.00 428057.01 36636.99 613143.24 00:09:34.803 [2024-11-26T17:55:51.729Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:34.803 Verification LBA range: start 0x0 length 0x5e7f 00:09:34.803 Nvme0n1p2 : 5.31 295.13 18.45 0.00 0.00 422993.15 38110.89 559240.53 00:09:34.803 [2024-11-26T17:55:51.729Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:34.803 Verification LBA range: start 0x5e7f length 0x5e7f 00:09:34.803 Nvme0n1p2 : 5.32 294.78 18.42 0.00 0.00 423435.76 36636.99 562609.45 00:09:34.803 [2024-11-26T17:55:51.729Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:34.803 Verification LBA range: start 0x0 length 0xa000 00:09:34.803 Nvme1n1 : 5.31 295.06 18.44 0.00 0.00 418505.20 37689.78 515444.59 00:09:34.803 [2024-11-26T17:55:51.729Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:34.803 Verification LBA range: start 0xa000 length 0xa000 00:09:34.803 Nvme1n1 : 5.32 294.70 18.42 0.00 0.00 418834.27 37058.11 518813.51 00:09:34.804 [2024-11-26T17:55:51.730Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:34.804 Verification LBA range: start 0x0 length 0x8000 00:09:34.804 Nvme2n1 : 5.35 300.83 18.80 0.00 0.00 406174.44 38953.12 471648.64 00:09:34.804 [2024-11-26T17:55:51.730Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:34.804 Verification LBA range: start 0x8000 length 0x8000 00:09:34.804 Nvme2n1 : 5.32 294.62 18.41 0.00 0.00 414237.48 37479.22 475017.56 00:09:34.804 [2024-11-26T17:55:51.730Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:34.804 Verification LBA range: start 0x0 length 0x8000 00:09:34.804 Nvme2n2 : 5.35 300.74 18.80 0.00 0.00 401770.65 39374.24 485124.32 00:09:34.804 [2024-11-26T17:55:51.730Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:34.804 Verification LBA range: start 0x8000 length 0x8000 00:09:34.804 Nvme2n2 : 5.36 300.29 18.77 0.00 0.00 401638.95 40848.14 488493.24 00:09:34.804 [2024-11-26T17:55:51.730Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:34.804 Verification LBA range: start 0x0 length 
0x8000 00:09:34.804 Nvme2n3 : 5.38 317.36 19.83 0.00 0.00 379245.24 3711.07 495231.07 00:09:34.804 [2024-11-26T17:55:51.730Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:34.804 Verification LBA range: start 0x8000 length 0x8000 00:09:34.804 Nvme2n3 : 5.38 317.02 19.81 0.00 0.00 378997.23 4316.43 498599.99 00:09:34.804 [2024-11-26T17:55:51.730Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:34.804 Verification LBA range: start 0x0 length 0x2000 00:09:34.804 Nvme3n1 : 5.38 324.62 20.29 0.00 0.00 366931.26 3303.12 660308.10 00:09:34.804 [2024-11-26T17:55:51.730Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:34.804 Verification LBA range: start 0x2000 length 0x2000 00:09:34.804 Nvme3n1 : 5.39 324.30 20.27 0.00 0.00 366598.84 2381.93 778220.26 00:09:34.804 [2024-11-26T17:55:51.730Z] =================================================================================================================== 00:09:34.804 [2024-11-26T17:55:51.730Z] Total : 4249.53 265.60 0.00 0.00 402995.42 2381.93 778220.26 00:09:35.373 00:09:35.373 real 0m6.710s 00:09:35.373 user 0m12.587s 00:09:35.373 sys 0m0.288s 00:09:35.373 17:55:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:35.373 ************************************ 00:09:35.373 END TEST bdev_verify_big_io 00:09:35.373 ************************************ 00:09:35.373 17:55:52 -- common/autotest_common.sh@10 -- # set +x 00:09:35.373 17:55:52 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:35.373 17:55:52 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:35.373 17:55:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:35.373 17:55:52 -- common/autotest_common.sh@10 -- # set +x 00:09:35.373 ************************************ 00:09:35.373 START TEST bdev_write_zeroes 00:09:35.373 ************************************ 00:09:35.373 17:55:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:35.633 [2024-11-26 17:55:52.321153] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:35.633 [2024-11-26 17:55:52.321286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74469 ] 00:09:35.633 [2024-11-26 17:55:52.466187] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.633 [2024-11-26 17:55:52.506579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.200 Running I/O for 1 seconds... 
00:09:38.102 00:09:38.102 Latency(us) 00:09:38.102 [2024-11-26T17:55:55.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.102 [2024-11-26T17:55:55.028Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:38.102 Nvme0n1p1 : 1.58 4006.15 15.65 0.00 0.00 27692.51 5632.41 896132.42 00:09:38.102 [2024-11-26T17:55:55.028Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:38.102 Nvme0n1p2 : 1.60 3932.26 15.36 0.00 0.00 26384.00 10369.95 970248.64 00:09:38.102 [2024-11-26T17:55:55.028Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:38.102 Nvme1n1 : 1.04 6251.36 24.42 0.00 0.00 20313.17 9896.20 357105.40 00:09:38.102 [2024-11-26T17:55:55.028Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:38.102 Nvme2n1 : 1.04 6223.86 24.31 0.00 0.00 20391.17 10212.04 358789.86 00:09:38.102 [2024-11-26T17:55:55.028Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:38.102 Nvme2n2 : 1.04 6217.90 24.29 0.00 0.00 20328.69 10159.40 355420.94 00:09:38.102 [2024-11-26T17:55:55.028Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:38.102 Nvme2n3 : 1.04 6211.99 24.27 0.00 0.00 20216.25 10106.76 355420.94 00:09:38.102 [2024-11-26T17:55:55.028Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:38.102 Nvme3n1 : 1.04 6206.03 24.24 0.00 0.00 20201.85 10001.48 360474.32 00:09:38.102 [2024-11-26T17:55:55.028Z] =================================================================================================================== 00:09:38.102 [2024-11-26T17:55:55.028Z] Total : 39049.55 152.54 0.00 0.00 22184.59 5632.41 970248.64 00:09:38.102 00:09:38.102 real 0m2.544s 00:09:38.102 user 0m2.222s 00:09:38.102 sys 0m0.209s 00:09:38.102 17:55:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:38.102 ************************************ 00:09:38.102 END TEST bdev_write_zeroes 00:09:38.102 17:55:54 -- common/autotest_common.sh@10 -- # set +x 00:09:38.102 ************************************ 00:09:38.102 17:55:54 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:38.102 17:55:54 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:38.102 17:55:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:38.102 17:55:54 -- common/autotest_common.sh@10 -- # set +x 00:09:38.102 ************************************ 00:09:38.102 START TEST bdev_json_nonenclosed 00:09:38.102 ************************************ 00:09:38.102 17:55:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:38.102 [2024-11-26 17:55:54.936814] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:09:38.102 [2024-11-26 17:55:54.936943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74516 ] 00:09:38.361 [2024-11-26 17:55:55.080606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.361 [2024-11-26 17:55:55.118822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.361 [2024-11-26 17:55:55.119030] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:38.361 [2024-11-26 17:55:55.119062] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:38.361 00:09:38.361 real 0m0.378s 00:09:38.361 user 0m0.144s 00:09:38.361 sys 0m0.130s 00:09:38.361 17:55:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:38.361 ************************************ 00:09:38.361 END TEST bdev_json_nonenclosed 00:09:38.361 17:55:55 -- common/autotest_common.sh@10 -- # set +x 00:09:38.361 ************************************ 00:09:38.361 17:55:55 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:38.361 17:55:55 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:38.361 17:55:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:38.361 17:55:55 -- common/autotest_common.sh@10 -- # set +x 00:09:38.620 ************************************ 00:09:38.620 START TEST bdev_json_nonarray 00:09:38.620 ************************************ 00:09:38.620 17:55:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:38.620 [2024-11-26 17:55:55.373658] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:38.620 [2024-11-26 17:55:55.373789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74542 ] 00:09:38.620 [2024-11-26 17:55:55.516520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.879 [2024-11-26 17:55:55.555609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.879 [2024-11-26 17:55:55.555820] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:38.879 [2024-11-26 17:55:55.555852] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:38.879 00:09:38.879 real 0m0.362s 00:09:38.879 user 0m0.146s 00:09:38.879 sys 0m0.113s 00:09:38.879 17:55:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:38.879 17:55:55 -- common/autotest_common.sh@10 -- # set +x 00:09:38.879 ************************************ 00:09:38.879 END TEST bdev_json_nonarray 00:09:38.879 ************************************ 00:09:38.879 17:55:55 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:09:38.879 17:55:55 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:09:38.879 17:55:55 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:38.879 17:55:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:38.879 17:55:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:38.879 17:55:55 -- common/autotest_common.sh@10 -- # set +x 00:09:38.879 ************************************ 00:09:38.879 START TEST bdev_gpt_uuid 00:09:38.879 ************************************ 00:09:38.879 17:55:55 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid 00:09:38.879 17:55:55 -- bdev/blockdev.sh@612 -- # local bdev 00:09:38.879 17:55:55 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:09:38.879 17:55:55 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=74566 00:09:38.879 17:55:55 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:38.879 17:55:55 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:38.879 17:55:55 -- bdev/blockdev.sh@47 -- # waitforlisten 74566 00:09:38.879 17:55:55 -- common/autotest_common.sh@829 -- # '[' -z 74566 ']' 00:09:38.879 17:55:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.879 17:55:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:38.879 17:55:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.879 17:55:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:38.879 17:55:55 -- common/autotest_common.sh@10 -- # set +x 00:09:39.138 [2024-11-26 17:55:55.816793] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:39.138 [2024-11-26 17:55:55.816919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74566 ] 00:09:39.138 [2024-11-26 17:55:55.961300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.138 [2024-11-26 17:55:56.002423] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:39.138 [2024-11-26 17:55:56.002635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.703 17:55:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:39.703 17:55:56 -- common/autotest_common.sh@862 -- # return 0 00:09:39.703 17:55:56 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:39.703 17:55:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.703 17:55:56 -- common/autotest_common.sh@10 -- # set +x 00:09:40.276 Some configs were skipped because the RPC state that can call them passed over. 
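The bdev_gpt_uuid assertions that follow all reduce to one pattern: fetch a partition bdev over RPC and compare both its alias and its driver-specific unique_partition_guid against the expected GPT UUID. A minimal sketch of that pattern, assuming a running spdk_tgt, the stock scripts/rpc.py helper, and jq on PATH (the UUID is the first partition's, as the output below shows):

# Fetch one GPT partition bdev by name and verify both identifiers match.
expected=6f89f330-603b-4116-ac73-2ca8eae53030
bdev_json=$(scripts/rpc.py bdev_get_bdevs -b "$expected")
[[ $(jq -r '.[0].aliases[0]' <<< "$bdev_json") == "$expected" ]]
[[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev_json") == "$expected" ]]

The same comparison is then repeated for the second partition, abf1734f-66e5-4c0f-aa29-4021d4d307df.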
00:09:40.276 17:55:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.276 17:55:56 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:09:40.276 17:55:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.276 17:55:56 -- common/autotest_common.sh@10 -- # set +x 00:09:40.276 17:55:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.276 17:55:56 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:40.276 17:55:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.276 17:55:56 -- common/autotest_common.sh@10 -- # set +x 00:09:40.276 17:55:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.276 17:55:56 -- bdev/blockdev.sh@619 -- # bdev='[ 00:09:40.276 { 00:09:40.276 "name": "Nvme0n1p1", 00:09:40.276 "aliases": [ 00:09:40.276 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:40.276 ], 00:09:40.276 "product_name": "GPT Disk", 00:09:40.276 "block_size": 4096, 00:09:40.276 "num_blocks": 774144, 00:09:40.276 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:40.276 "md_size": 64, 00:09:40.276 "md_interleave": false, 00:09:40.276 "dif_type": 0, 00:09:40.276 "assigned_rate_limits": { 00:09:40.276 "rw_ios_per_sec": 0, 00:09:40.276 "rw_mbytes_per_sec": 0, 00:09:40.276 "r_mbytes_per_sec": 0, 00:09:40.276 "w_mbytes_per_sec": 0 00:09:40.276 }, 00:09:40.276 "claimed": false, 00:09:40.276 "zoned": false, 00:09:40.276 "supported_io_types": { 00:09:40.276 "read": true, 00:09:40.276 "write": true, 00:09:40.276 "unmap": true, 00:09:40.276 "write_zeroes": true, 00:09:40.276 "flush": true, 00:09:40.276 "reset": true, 00:09:40.276 "compare": true, 00:09:40.276 "compare_and_write": false, 00:09:40.276 "abort": true, 00:09:40.276 "nvme_admin": false, 00:09:40.276 "nvme_io": false 00:09:40.276 }, 00:09:40.276 "driver_specific": { 00:09:40.276 "gpt": { 00:09:40.276 "base_bdev": "Nvme0n1", 00:09:40.276 "offset_blocks": 256, 00:09:40.276 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:40.276 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:40.276 "partition_name": "SPDK_TEST_first" 00:09:40.276 } 00:09:40.276 } 00:09:40.276 } 00:09:40.276 ]' 00:09:40.276 17:55:56 -- bdev/blockdev.sh@620 -- # jq -r length 00:09:40.276 17:55:57 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:09:40.276 17:55:57 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:09:40.276 17:55:57 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:40.276 17:55:57 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:40.276 17:55:57 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:40.276 17:55:57 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:40.276 17:55:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.276 17:55:57 -- common/autotest_common.sh@10 -- # set +x 00:09:40.276 17:55:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.276 17:55:57 -- bdev/blockdev.sh@624 -- # bdev='[ 00:09:40.276 { 00:09:40.276 "name": "Nvme0n1p2", 00:09:40.276 "aliases": [ 00:09:40.276 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:40.276 ], 00:09:40.276 "product_name": "GPT Disk", 00:09:40.276 "block_size": 4096, 00:09:40.276 "num_blocks": 774143, 00:09:40.276 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:09:40.276 "md_size": 64, 00:09:40.276 "md_interleave": false, 00:09:40.276 "dif_type": 0, 00:09:40.276 "assigned_rate_limits": { 00:09:40.276 "rw_ios_per_sec": 0, 00:09:40.276 "rw_mbytes_per_sec": 0, 00:09:40.276 "r_mbytes_per_sec": 0, 00:09:40.276 "w_mbytes_per_sec": 0 00:09:40.276 }, 00:09:40.276 "claimed": false, 00:09:40.276 "zoned": false, 00:09:40.276 "supported_io_types": { 00:09:40.276 "read": true, 00:09:40.276 "write": true, 00:09:40.276 "unmap": true, 00:09:40.276 "write_zeroes": true, 00:09:40.276 "flush": true, 00:09:40.276 "reset": true, 00:09:40.276 "compare": true, 00:09:40.276 "compare_and_write": false, 00:09:40.276 "abort": true, 00:09:40.276 "nvme_admin": false, 00:09:40.276 "nvme_io": false 00:09:40.276 }, 00:09:40.276 "driver_specific": { 00:09:40.276 "gpt": { 00:09:40.276 "base_bdev": "Nvme0n1", 00:09:40.277 "offset_blocks": 774400, 00:09:40.277 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:40.277 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:40.277 "partition_name": "SPDK_TEST_second" 00:09:40.277 } 00:09:40.277 } 00:09:40.277 } 00:09:40.277 ]' 00:09:40.277 17:55:57 -- bdev/blockdev.sh@625 -- # jq -r length 00:09:40.277 17:55:57 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:09:40.277 17:55:57 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:09:40.277 17:55:57 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:40.277 17:55:57 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:40.544 17:55:57 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:40.544 17:55:57 -- bdev/blockdev.sh@629 -- # killprocess 74566 00:09:40.544 17:55:57 -- common/autotest_common.sh@936 -- # '[' -z 74566 ']' 00:09:40.544 17:55:57 -- common/autotest_common.sh@940 -- # kill -0 74566 00:09:40.544 17:55:57 -- common/autotest_common.sh@941 -- # uname 00:09:40.544 17:55:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:40.544 17:55:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74566 00:09:40.544 17:55:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:40.544 killing process with pid 74566 00:09:40.544 17:55:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:40.544 17:55:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74566' 00:09:40.544 17:55:57 -- common/autotest_common.sh@955 -- # kill 74566 00:09:40.544 17:55:57 -- common/autotest_common.sh@960 -- # wait 74566 00:09:40.801 00:09:40.801 real 0m1.940s 00:09:40.801 user 0m2.074s 00:09:40.801 sys 0m0.460s 00:09:40.801 17:55:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:40.801 ************************************ 00:09:40.801 END TEST bdev_gpt_uuid 00:09:40.801 ************************************ 00:09:40.801 17:55:57 -- common/autotest_common.sh@10 -- # set +x 00:09:40.801 17:55:57 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:09:40.801 17:55:57 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:09:40.801 17:55:57 -- bdev/blockdev.sh@809 -- # cleanup 00:09:40.801 17:55:57 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:41.060 17:55:57 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:41.060 17:55:57 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 
00:09:41.060 17:55:57 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:09:41.060 17:55:57 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:09:41.060 17:55:57 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:41.629 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:41.629 Waiting for block devices as requested 00:09:41.888 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:09:41.888 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.147 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.147 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:09:47.425 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:09:47.425 17:56:04 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:09:47.425 17:56:04 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:09:47.425 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:47.425 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:09:47.425 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:47.425 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:09:47.425 17:56:04 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:09:47.425 00:09:47.425 real 0m51.783s 00:09:47.425 user 1m2.667s 00:09:47.425 sys 0m11.533s 00:09:47.425 17:56:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:47.425 17:56:04 -- common/autotest_common.sh@10 -- # set +x 00:09:47.425 ************************************ 00:09:47.425 END TEST blockdev_nvme_gpt 00:09:47.425 ************************************ 00:09:47.684 17:56:04 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:47.684 17:56:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:47.684 17:56:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:47.684 17:56:04 -- common/autotest_common.sh@10 -- # set +x 00:09:47.684 ************************************ 00:09:47.684 START TEST nvme 00:09:47.684 ************************************ 00:09:47.684 17:56:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:47.684 * Looking for test storage... 
00:09:47.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:47.684 17:56:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:47.684 17:56:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:47.684 17:56:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:47.943 17:56:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:47.943 17:56:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:47.943 17:56:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:47.943 17:56:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:47.943 17:56:04 -- scripts/common.sh@335 -- # IFS=.-: 00:09:47.943 17:56:04 -- scripts/common.sh@335 -- # read -ra ver1 00:09:47.943 17:56:04 -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.943 17:56:04 -- scripts/common.sh@336 -- # read -ra ver2 00:09:47.943 17:56:04 -- scripts/common.sh@337 -- # local 'op=<' 00:09:47.943 17:56:04 -- scripts/common.sh@339 -- # ver1_l=2 00:09:47.943 17:56:04 -- scripts/common.sh@340 -- # ver2_l=1 00:09:47.943 17:56:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:47.943 17:56:04 -- scripts/common.sh@343 -- # case "$op" in 00:09:47.943 17:56:04 -- scripts/common.sh@344 -- # : 1 00:09:47.943 17:56:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:47.943 17:56:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:47.943 17:56:04 -- scripts/common.sh@364 -- # decimal 1 00:09:47.943 17:56:04 -- scripts/common.sh@352 -- # local d=1 00:09:47.943 17:56:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.943 17:56:04 -- scripts/common.sh@354 -- # echo 1 00:09:47.943 17:56:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:47.943 17:56:04 -- scripts/common.sh@365 -- # decimal 2 00:09:47.943 17:56:04 -- scripts/common.sh@352 -- # local d=2 00:09:47.943 17:56:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.943 17:56:04 -- scripts/common.sh@354 -- # echo 2 00:09:47.943 17:56:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:47.943 17:56:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:47.943 17:56:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:47.943 17:56:04 -- scripts/common.sh@367 -- # return 0 00:09:47.943 17:56:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.943 17:56:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:47.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.943 --rc genhtml_branch_coverage=1 00:09:47.943 --rc genhtml_function_coverage=1 00:09:47.943 --rc genhtml_legend=1 00:09:47.943 --rc geninfo_all_blocks=1 00:09:47.943 --rc geninfo_unexecuted_blocks=1 00:09:47.943 00:09:47.943 ' 00:09:47.943 17:56:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:47.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.943 --rc genhtml_branch_coverage=1 00:09:47.943 --rc genhtml_function_coverage=1 00:09:47.943 --rc genhtml_legend=1 00:09:47.943 --rc geninfo_all_blocks=1 00:09:47.944 --rc geninfo_unexecuted_blocks=1 00:09:47.944 00:09:47.944 ' 00:09:47.944 17:56:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:47.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.944 --rc genhtml_branch_coverage=1 00:09:47.944 --rc genhtml_function_coverage=1 00:09:47.944 --rc genhtml_legend=1 00:09:47.944 --rc geninfo_all_blocks=1 00:09:47.944 --rc geninfo_unexecuted_blocks=1 00:09:47.944 00:09:47.944 ' 00:09:47.944 17:56:04 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:47.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.944 --rc genhtml_branch_coverage=1 00:09:47.944 --rc genhtml_function_coverage=1 00:09:47.944 --rc genhtml_legend=1 00:09:47.944 --rc geninfo_all_blocks=1 00:09:47.944 --rc geninfo_unexecuted_blocks=1 00:09:47.944 00:09:47.944 ' 00:09:47.944 17:56:04 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:49.320 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:49.320 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:09:49.320 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:09:49.320 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:49.320 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:09:49.579 17:56:06 -- nvme/nvme.sh@79 -- # uname 00:09:49.579 17:56:06 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:49.579 17:56:06 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:49.579 17:56:06 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:49.579 17:56:06 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:49.579 17:56:06 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:09:49.579 17:56:06 -- common/autotest_common.sh@1055 -- # echo 0 00:09:49.579 17:56:06 -- common/autotest_common.sh@1057 -- # stubpid=75225 00:09:49.579 Waiting for stub to ready for secondary processes... 00:09:49.579 17:56:06 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:09:49.579 17:56:06 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:49.579 17:56:06 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:49.579 17:56:06 -- common/autotest_common.sh@1061 -- # [[ -e /proc/75225 ]] 00:09:49.579 17:56:06 -- common/autotest_common.sh@1062 -- # sleep 1s 00:09:49.579 [2024-11-26 17:56:06.334407] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:09:49.579 [2024-11-26 17:56:06.334552] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.516 17:56:07 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:50.516 17:56:07 -- common/autotest_common.sh@1061 -- # [[ -e /proc/75225 ]] 00:09:50.516 17:56:07 -- common/autotest_common.sh@1062 -- # sleep 1s 00:09:50.516 [2024-11-26 17:56:07.340763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:50.516 [2024-11-26 17:56:07.366977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.516 [2024-11-26 17:56:07.367134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.516 [2024-11-26 17:56:07.367235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:50.516 [2024-11-26 17:56:07.387848] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:50.516 [2024-11-26 17:56:07.402725] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:50.516 [2024-11-26 17:56:07.403191] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:50.516 [2024-11-26 17:56:07.408019] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:50.516 [2024-11-26 17:56:07.408203] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:50.516 [2024-11-26 17:56:07.408354] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:50.516 [2024-11-26 17:56:07.413907] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:50.516 [2024-11-26 17:56:07.414110] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:50.516 [2024-11-26 17:56:07.414250] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:50.516 [2024-11-26 17:56:07.419597] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:50.516 [2024-11-26 17:56:07.419831] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:50.516 [2024-11-26 17:56:07.420018] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:50.516 [2024-11-26 17:56:07.420203] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:50.516 [2024-11-26 17:56:07.420401] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:51.453 17:56:08 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:51.453 done. 00:09:51.453 17:56:08 -- common/autotest_common.sh@1064 -- # echo done. 
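The "Waiting for stub to ready for secondary processes..." phase above is a once-per-second poll: the harness re-tests for the stub's readiness marker and bails out if the stub process disappears first. Roughly, as a sketch rather than the verbatim autotest_common.sh source:

# Poll until the stub creates its readiness marker, failing fast if it dies.
stubpid=75225                            # PID reported by the harness above
while [ ! -e /var/run/spdk_stub0 ]; do
    [[ -e /proc/$stubpid ]] || exit 1    # stub exited before becoming ready
    sleep 1s
done

Once /var/run/spdk_stub0 exists, the secondary test binaries that follow (reset, identify, and the rest) attach to the stub's preallocated hugepage memory via the shared -i 0 shm id instead of paying full DPDK EAL initialization per test.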
00:09:51.453 17:56:08 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:51.453 17:56:08 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:09:51.453 17:56:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:51.453 17:56:08 -- common/autotest_common.sh@10 -- # set +x 00:09:51.453 ************************************ 00:09:51.453 START TEST nvme_reset 00:09:51.453 ************************************ 00:09:51.453 17:56:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:51.712 Initializing NVMe Controllers 00:09:51.712 Skipping QEMU NVMe SSD at 0000:00:09.0 00:09:51.712 Skipping QEMU NVMe SSD at 0000:00:06.0 00:09:51.712 Skipping QEMU NVMe SSD at 0000:00:07.0 00:09:51.712 Skipping QEMU NVMe SSD at 0000:00:08.0 00:09:51.712 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:51.712 00:09:51.712 ************************************ 00:09:51.712 END TEST nvme_reset 00:09:51.712 ************************************ 00:09:51.712 real 0m0.255s 00:09:51.712 user 0m0.081s 00:09:51.712 sys 0m0.129s 00:09:51.712 17:56:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:51.712 17:56:08 -- common/autotest_common.sh@10 -- # set +x 00:09:51.712 17:56:08 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:51.712 17:56:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:51.712 17:56:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:51.712 17:56:08 -- common/autotest_common.sh@10 -- # set +x 00:09:51.712 ************************************ 00:09:51.712 START TEST nvme_identify 00:09:51.712 ************************************ 00:09:51.712 17:56:08 -- common/autotest_common.sh@1114 -- # nvme_identify 00:09:51.712 17:56:08 -- nvme/nvme.sh@12 -- # bdfs=() 00:09:51.712 17:56:08 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:51.712 17:56:08 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:51.971 17:56:08 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:51.971 17:56:08 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:51.971 17:56:08 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:51.971 17:56:08 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:51.971 17:56:08 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:51.971 17:56:08 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:51.971 17:56:08 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:51.971 17:56:08 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:51.971 17:56:08 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:52.231 [2024-11-26 17:56:08.966417] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 75267 terminated unexpected 00:09:52.231 ===================================================== 00:09:52.231 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:52.231 ===================================================== 00:09:52.232 Controller Capabilities/Features 00:09:52.232 ================================ 00:09:52.232 Vendor ID: 1b36 00:09:52.232 Subsystem Vendor ID: 1af4 00:09:52.232 Serial Number: 12343 00:09:52.232 Model Number: QEMU NVMe Ctrl 00:09:52.232 Firmware Version: 8.0.0 00:09:52.232 Recommended Arb 
Burst: 6 00:09:52.232 IEEE OUI Identifier: 00 54 52 00:09:52.232 Multi-path I/O 00:09:52.232 May have multiple subsystem ports: No 00:09:52.232 May have multiple controllers: Yes 00:09:52.232 Associated with SR-IOV VF: No 00:09:52.232 Max Data Transfer Size: 524288 00:09:52.232 Max Number of Namespaces: 256 00:09:52.232 Max Number of I/O Queues: 64 00:09:52.232 NVMe Specification Version (VS): 1.4 00:09:52.232 NVMe Specification Version (Identify): 1.4 00:09:52.232 Maximum Queue Entries: 2048 00:09:52.232 Contiguous Queues Required: Yes 00:09:52.232 Arbitration Mechanisms Supported 00:09:52.232 Weighted Round Robin: Not Supported 00:09:52.232 Vendor Specific: Not Supported 00:09:52.232 Reset Timeout: 7500 ms 00:09:52.232 Doorbell Stride: 4 bytes 00:09:52.232 NVM Subsystem Reset: Not Supported 00:09:52.232 Command Sets Supported 00:09:52.232 NVM Command Set: Supported 00:09:52.232 Boot Partition: Not Supported 00:09:52.232 Memory Page Size Minimum: 4096 bytes 00:09:52.232 Memory Page Size Maximum: 65536 bytes 00:09:52.232 Persistent Memory Region: Not Supported 00:09:52.232 Optional Asynchronous Events Supported 00:09:52.232 Namespace Attribute Notices: Supported 00:09:52.232 Firmware Activation Notices: Not Supported 00:09:52.232 ANA Change Notices: Not Supported 00:09:52.232 PLE Aggregate Log Change Notices: Not Supported 00:09:52.232 LBA Status Info Alert Notices: Not Supported 00:09:52.232 EGE Aggregate Log Change Notices: Not Supported 00:09:52.232 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.232 Zone Descriptor Change Notices: Not Supported 00:09:52.232 Discovery Log Change Notices: Not Supported 00:09:52.232 Controller Attributes 00:09:52.232 128-bit Host Identifier: Not Supported 00:09:52.232 Non-Operational Permissive Mode: Not Supported 00:09:52.232 NVM Sets: Not Supported 00:09:52.232 Read Recovery Levels: Not Supported 00:09:52.232 Endurance Groups: Supported 00:09:52.232 Predictable Latency Mode: Not Supported 00:09:52.232 Traffic Based Keep ALive: Not Supported 00:09:52.232 Namespace Granularity: Not Supported 00:09:52.232 SQ Associations: Not Supported 00:09:52.232 UUID List: Not Supported 00:09:52.232 Multi-Domain Subsystem: Not Supported 00:09:52.232 Fixed Capacity Management: Not Supported 00:09:52.232 Variable Capacity Management: Not Supported 00:09:52.232 Delete Endurance Group: Not Supported 00:09:52.232 Delete NVM Set: Not Supported 00:09:52.232 Extended LBA Formats Supported: Supported 00:09:52.232 Flexible Data Placement Supported: Supported 00:09:52.232 00:09:52.232 Controller Memory Buffer Support 00:09:52.232 ================================ 00:09:52.232 Supported: No 00:09:52.232 00:09:52.232 Persistent Memory Region Support 00:09:52.232 ================================ 00:09:52.232 Supported: No 00:09:52.232 00:09:52.232 Admin Command Set Attributes 00:09:52.232 ============================ 00:09:52.232 Security Send/Receive: Not Supported 00:09:52.232 Format NVM: Supported 00:09:52.232 Firmware Activate/Download: Not Supported 00:09:52.232 Namespace Management: Supported 00:09:52.232 Device Self-Test: Not Supported 00:09:52.232 Directives: Supported 00:09:52.232 NVMe-MI: Not Supported 00:09:52.232 Virtualization Management: Not Supported 00:09:52.232 Doorbell Buffer Config: Supported 00:09:52.232 Get LBA Status Capability: Not Supported 00:09:52.232 Command & Feature Lockdown Capability: Not Supported 00:09:52.232 Abort Command Limit: 4 00:09:52.232 Async Event Request Limit: 4 00:09:52.232 Number of Firmware Slots: N/A 00:09:52.232 Firmware 
Slot 1 Read-Only: N/A 00:09:52.232 Firmware Activation Without Reset: N/A 00:09:52.232 Multiple Update Detection Support: N/A 00:09:52.232 Firmware Update Granularity: No Information Provided 00:09:52.232 Per-Namespace SMART Log: Yes 00:09:52.232 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.232 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:52.232 Command Effects Log Page: Supported 00:09:52.232 Get Log Page Extended Data: Supported 00:09:52.232 Telemetry Log Pages: Not Supported 00:09:52.232 Persistent Event Log Pages: Not Supported 00:09:52.232 Supported Log Pages Log Page: May Support 00:09:52.232 Commands Supported & Effects Log Page: Not Supported 00:09:52.232 Feature Identifiers & Effects Log Page:May Support 00:09:52.232 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.232 Data Area 4 for Telemetry Log: Not Supported 00:09:52.232 Error Log Page Entries Supported: 1 00:09:52.232 Keep Alive: Not Supported 00:09:52.232 00:09:52.232 NVM Command Set Attributes 00:09:52.232 ========================== 00:09:52.232 Submission Queue Entry Size 00:09:52.232 Max: 64 00:09:52.232 Min: 64 00:09:52.232 Completion Queue Entry Size 00:09:52.232 Max: 16 00:09:52.232 Min: 16 00:09:52.232 Number of Namespaces: 256 00:09:52.232 Compare Command: Supported 00:09:52.232 Write Uncorrectable Command: Not Supported 00:09:52.232 Dataset Management Command: Supported 00:09:52.232 Write Zeroes Command: Supported 00:09:52.232 Set Features Save Field: Supported 00:09:52.232 Reservations: Not Supported 00:09:52.232 Timestamp: Supported 00:09:52.232 Copy: Supported 00:09:52.232 Volatile Write Cache: Present 00:09:52.232 Atomic Write Unit (Normal): 1 00:09:52.232 Atomic Write Unit (PFail): 1 00:09:52.232 Atomic Compare & Write Unit: 1 00:09:52.232 Fused Compare & Write: Not Supported 00:09:52.232 Scatter-Gather List 00:09:52.232 SGL Command Set: Supported 00:09:52.232 SGL Keyed: Not Supported 00:09:52.232 SGL Bit Bucket Descriptor: Not Supported 00:09:52.232 SGL Metadata Pointer: Not Supported 00:09:52.232 Oversized SGL: Not Supported 00:09:52.232 SGL Metadata Address: Not Supported 00:09:52.232 SGL Offset: Not Supported 00:09:52.233 Transport SGL Data Block: Not Supported 00:09:52.233 Replay Protected Memory Block: Not Supported 00:09:52.233 00:09:52.233 Firmware Slot Information 00:09:52.233 ========================= 00:09:52.233 Active slot: 1 00:09:52.233 Slot 1 Firmware Revision: 1.0 00:09:52.233 00:09:52.233 00:09:52.233 Commands Supported and Effects 00:09:52.233 ============================== 00:09:52.233 Admin Commands 00:09:52.233 -------------- 00:09:52.233 Delete I/O Submission Queue (00h): Supported 00:09:52.233 Create I/O Submission Queue (01h): Supported 00:09:52.233 Get Log Page (02h): Supported 00:09:52.233 Delete I/O Completion Queue (04h): Supported 00:09:52.233 Create I/O Completion Queue (05h): Supported 00:09:52.233 Identify (06h): Supported 00:09:52.233 Abort (08h): Supported 00:09:52.233 Set Features (09h): Supported 00:09:52.233 Get Features (0Ah): Supported 00:09:52.233 Asynchronous Event Request (0Ch): Supported 00:09:52.233 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:52.233 Directive Send (19h): Supported 00:09:52.233 Directive Receive (1Ah): Supported 00:09:52.233 Virtualization Management (1Ch): Supported 00:09:52.233 Doorbell Buffer Config (7Ch): Supported 00:09:52.233 Format NVM (80h): Supported LBA-Change 00:09:52.233 I/O Commands 00:09:52.233 ------------ 00:09:52.233 Flush (00h): Supported LBA-Change 00:09:52.233 Write (01h): 
Supported LBA-Change 00:09:52.233 Read (02h): Supported 00:09:52.233 Compare (05h): Supported 00:09:52.233 Write Zeroes (08h): Supported LBA-Change 00:09:52.233 Dataset Management (09h): Supported LBA-Change 00:09:52.233 Unknown (0Ch): Supported 00:09:52.233 Unknown (12h): Supported 00:09:52.233 Copy (19h): Supported LBA-Change 00:09:52.233 Unknown (1Dh): Supported LBA-Change 00:09:52.233 00:09:52.233 Error Log 00:09:52.233 ========= 00:09:52.233 00:09:52.233 Arbitration 00:09:52.233 =========== 00:09:52.233 Arbitration Burst: no limit 00:09:52.233 00:09:52.233 Power Management 00:09:52.233 ================ 00:09:52.233 Number of Power States: 1 00:09:52.233 Current Power State: Power State #0 00:09:52.233 Power State #0: 00:09:52.233 Max Power: 25.00 W 00:09:52.233 Non-Operational State: Operational 00:09:52.233 Entry Latency: 16 microseconds 00:09:52.233 Exit Latency: 4 microseconds 00:09:52.233 Relative Read Throughput: 0 00:09:52.233 Relative Read Latency: 0 00:09:52.233 Relative Write Throughput: 0 00:09:52.233 Relative Write Latency: 0 00:09:52.233 Idle Power: Not Reported 00:09:52.233 Active Power: Not Reported 00:09:52.233 Non-Operational Permissive Mode: Not Supported 00:09:52.233 00:09:52.233 Health Information 00:09:52.233 ================== 00:09:52.233 Critical Warnings: 00:09:52.233 Available Spare Space: OK 00:09:52.233 Temperature: OK 00:09:52.233 Device Reliability: OK 00:09:52.233 Read Only: No 00:09:52.233 Volatile Memory Backup: OK 00:09:52.233 Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.233 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:52.233 Available Spare: 0% 00:09:52.233 Available Spare Threshold: 0% 00:09:52.233 Life Percentage Used: 0% 00:09:52.233 Data Units Read: 1498 00:09:52.233 Data Units Written: 698 00:09:52.233 Host Read Commands: 60434 00:09:52.233 Host Write Commands: 29702 00:09:52.233 Controller Busy Time: 0 minutes 00:09:52.233 Power Cycles: 0 00:09:52.233 Power On Hours: 0 hours 00:09:52.233 Unsafe Shutdowns: 0 00:09:52.233 Unrecoverable Media Errors: 0 00:09:52.233 Lifetime Error Log Entries: 0 00:09:52.233 Warning Temperature Time: 0 minutes 00:09:52.233 Critical Temperature Time: 0 minutes 00:09:52.233 00:09:52.233 Number of Queues 00:09:52.233 ================ 00:09:52.233 Number of I/O Submission Queues: 64 00:09:52.233 Number of I/O Completion Queues: 64 00:09:52.233 00:09:52.233 ZNS Specific Controller Data 00:09:52.233 ============================ 00:09:52.233 Zone Append Size Limit: 0 00:09:52.233 00:09:52.233 00:09:52.233 Active Namespaces 00:09:52.233 ================= 00:09:52.233 Namespace ID:1 00:09:52.233 Error Recovery Timeout: Unlimited 00:09:52.233 Command Set Identifier: NVM (00h) 00:09:52.233 Deallocate: Supported 00:09:52.233 Deallocated/Unwritten Error: Supported 00:09:52.233 Deallocated Read Value: All 0x00 00:09:52.233 Deallocate in Write Zeroes: Not Supported 00:09:52.233 Deallocated Guard Field: 0xFFFF 00:09:52.233 Flush: Supported 00:09:52.233 Reservation: Not Supported 00:09:52.233 Namespace Sharing Capabilities: Multiple Controllers 00:09:52.233 Size (in LBAs): 262144 (1GiB) 00:09:52.233 Capacity (in LBAs): 262144 (1GiB) 00:09:52.233 Utilization (in LBAs): 262144 (1GiB) 00:09:52.233 Thin Provisioning: Not Supported 00:09:52.233 Per-NS Atomic Units: No 00:09:52.233 Maximum Single Source Range Length: 128 00:09:52.233 Maximum Copy Length: 128 00:09:52.233 Maximum Source Range Count: 128 00:09:52.233 NGUID/EUI64 Never Reused: No 00:09:52.233 Namespace Write Protected: No 00:09:52.233 Endurance group ID: 1 
00:09:52.233 Number of LBA Formats: 8 00:09:52.233 Current LBA Format: LBA Format #04 00:09:52.233 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.233 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.233 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.233 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.233 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.233 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.233 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.233 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.233 00:09:52.233 Get Feature FDP: 00:09:52.233 ================ 00:09:52.233 Enabled: Yes 00:09:52.233 FDP configuration index: 0 00:09:52.233 00:09:52.233 FDP configurations log page 00:09:52.233 =========================== 00:09:52.233 Number of FDP configurations: 1 00:09:52.233 Version: 0 00:09:52.233 Size: 112 00:09:52.233 FDP Configuration Descriptor: 0 00:09:52.233 Descriptor Size: 96 00:09:52.233 Reclaim Group Identifier format: 2 00:09:52.233 FDP Volatile Write Cache: Not Present 00:09:52.233 FDP Configuration: Valid 00:09:52.233 Vendor Specific Size: 0 00:09:52.233 Number of Reclaim Groups: 2 00:09:52.233 Number of Recalim Unit Handles: 8 00:09:52.233 Max Placement Identifiers: 128 00:09:52.233 Number of Namespaces Suppprted: 256 00:09:52.233 Reclaim unit Nominal Size: 6000000 bytes 00:09:52.234 Estimated Reclaim Unit Time Limit: Not Reported 00:09:52.234 RUH Desc #000: RUH Type: Initially Isolated 00:09:52.234 RUH Desc #001: RUH Type: Initially Isolated 00:09:52.234 RUH Desc #002: RUH Type: Initially Isolated 00:09:52.234 RUH Desc #003: RUH Type: Initially Isolated 00:09:52.234 RUH Desc #004: RUH Type: Initially Isolated 00:09:52.234 RUH Desc #005: RUH Type: Initially Isolated 00:09:52.234 RUH Desc #006: RUH Type: Initially Isolated 00:09:52.234 RUH Desc #007: RUH Type: Initially Isolated 00:09:52.234 00:09:52.234 FDP reclaim unit handle usage log page 00:09:52.234 =================================[2024-11-26 17:56:08.968331] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 75267 terminated unexpected 00:09:52.234 ===== 00:09:52.234 Number of Reclaim Unit Handles: 8 00:09:52.234 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:52.234 RUH Usage Desc #001: RUH Attributes: Unused 00:09:52.234 RUH Usage Desc #002: RUH Attributes: Unused 00:09:52.234 RUH Usage Desc #003: RUH Attributes: Unused 00:09:52.234 RUH Usage Desc #004: RUH Attributes: Unused 00:09:52.234 RUH Usage Desc #005: RUH Attributes: Unused 00:09:52.234 RUH Usage Desc #006: RUH Attributes: Unused 00:09:52.234 RUH Usage Desc #007: RUH Attributes: Unused 00:09:52.234 00:09:52.234 FDP statistics log page 00:09:52.234 ======================= 00:09:52.234 Host bytes with metadata written: 455462912 00:09:52.234 Media bytes with metadata written: 455516160 00:09:52.234 Media bytes erased: 0 00:09:52.234 00:09:52.234 FDP events log page 00:09:52.234 =================== 00:09:52.234 Number of FDP events: 0 00:09:52.234 00:09:52.234 ===================================================== 00:09:52.234 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:52.234 ===================================================== 00:09:52.234 Controller Capabilities/Features 00:09:52.234 ================================ 00:09:52.234 Vendor ID: 1b36 00:09:52.234 Subsystem Vendor ID: 1af4 00:09:52.234 Serial Number: 12340 00:09:52.234 Model Number: QEMU NVMe Ctrl 00:09:52.234 Firmware Version: 8.0.0 00:09:52.234 Recommended 
Arb Burst: 6 00:09:52.234 IEEE OUI Identifier: 00 54 52 00:09:52.234 Multi-path I/O 00:09:52.234 May have multiple subsystem ports: No 00:09:52.234 May have multiple controllers: No 00:09:52.234 Associated with SR-IOV VF: No 00:09:52.234 Max Data Transfer Size: 524288 00:09:52.234 Max Number of Namespaces: 256 00:09:52.234 Max Number of I/O Queues: 64 00:09:52.234 NVMe Specification Version (VS): 1.4 00:09:52.234 NVMe Specification Version (Identify): 1.4 00:09:52.234 Maximum Queue Entries: 2048 00:09:52.234 Contiguous Queues Required: Yes 00:09:52.234 Arbitration Mechanisms Supported 00:09:52.234 Weighted Round Robin: Not Supported 00:09:52.234 Vendor Specific: Not Supported 00:09:52.234 Reset Timeout: 7500 ms 00:09:52.234 Doorbell Stride: 4 bytes 00:09:52.234 NVM Subsystem Reset: Not Supported 00:09:52.234 Command Sets Supported 00:09:52.234 NVM Command Set: Supported 00:09:52.234 Boot Partition: Not Supported 00:09:52.234 Memory Page Size Minimum: 4096 bytes 00:09:52.234 Memory Page Size Maximum: 65536 bytes 00:09:52.234 Persistent Memory Region: Not Supported 00:09:52.234 Optional Asynchronous Events Supported 00:09:52.234 Namespace Attribute Notices: Supported 00:09:52.234 Firmware Activation Notices: Not Supported 00:09:52.234 ANA Change Notices: Not Supported 00:09:52.234 PLE Aggregate Log Change Notices: Not Supported 00:09:52.234 LBA Status Info Alert Notices: Not Supported 00:09:52.234 EGE Aggregate Log Change Notices: Not Supported 00:09:52.234 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.234 Zone Descriptor Change Notices: Not Supported 00:09:52.234 Discovery Log Change Notices: Not Supported 00:09:52.234 Controller Attributes 00:09:52.234 128-bit Host Identifier: Not Supported 00:09:52.234 Non-Operational Permissive Mode: Not Supported 00:09:52.234 NVM Sets: Not Supported 00:09:52.234 Read Recovery Levels: Not Supported 00:09:52.234 Endurance Groups: Not Supported 00:09:52.234 Predictable Latency Mode: Not Supported 00:09:52.234 Traffic Based Keep ALive: Not Supported 00:09:52.234 Namespace Granularity: Not Supported 00:09:52.234 SQ Associations: Not Supported 00:09:52.234 UUID List: Not Supported 00:09:52.234 Multi-Domain Subsystem: Not Supported 00:09:52.234 Fixed Capacity Management: Not Supported 00:09:52.234 Variable Capacity Management: Not Supported 00:09:52.234 Delete Endurance Group: Not Supported 00:09:52.234 Delete NVM Set: Not Supported 00:09:52.234 Extended LBA Formats Supported: Supported 00:09:52.234 Flexible Data Placement Supported: Not Supported 00:09:52.234 00:09:52.234 Controller Memory Buffer Support 00:09:52.234 ================================ 00:09:52.234 Supported: No 00:09:52.234 00:09:52.234 Persistent Memory Region Support 00:09:52.234 ================================ 00:09:52.234 Supported: No 00:09:52.234 00:09:52.234 Admin Command Set Attributes 00:09:52.234 ============================ 00:09:52.234 Security Send/Receive: Not Supported 00:09:52.234 Format NVM: Supported 00:09:52.234 Firmware Activate/Download: Not Supported 00:09:52.234 Namespace Management: Supported 00:09:52.234 Device Self-Test: Not Supported 00:09:52.234 Directives: Supported 00:09:52.234 NVMe-MI: Not Supported 00:09:52.234 Virtualization Management: Not Supported 00:09:52.234 Doorbell Buffer Config: Supported 00:09:52.234 Get LBA Status Capability: Not Supported 00:09:52.234 Command & Feature Lockdown Capability: Not Supported 00:09:52.234 Abort Command Limit: 4 00:09:52.234 Async Event Request Limit: 4 00:09:52.234 Number of Firmware Slots: N/A 00:09:52.234 
Firmware Slot 1 Read-Only: N/A 00:09:52.234 Firmware Activation Without Reset: N/A 00:09:52.234 Multiple Update Detection Support: N/A 00:09:52.234 Firmware Update Granularity: No Information Provided 00:09:52.234 Per-Namespace SMART Log: Yes 00:09:52.234 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.234 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:52.234 Command Effects Log Page: Supported 00:09:52.234 Get Log Page Extended Data: Supported 00:09:52.234 Telemetry Log Pages: Not Supported 00:09:52.234 Persistent Event Log Pages: Not Supported 00:09:52.234 Supported Log Pages Log Page: May Support 00:09:52.234 Commands Supported & Effects Log Page: Not Supported 00:09:52.234 Feature Identifiers & Effects Log Page:May Support 00:09:52.234 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.234 Data Area 4 for Telemetry Log: Not Supported 00:09:52.234 Error Log Page Entries Supported: 1 00:09:52.234 Keep Alive: Not Supported 00:09:52.234 00:09:52.234 NVM Command Set Attributes 00:09:52.234 ========================== 00:09:52.234 Submission Queue Entry Size 00:09:52.234 Max: 64 00:09:52.234 Min: 64 00:09:52.234 Completion Queue Entry Size 00:09:52.235 Max: 16 00:09:52.235 Min: 16 00:09:52.235 Number of Namespaces: 256 00:09:52.235 Compare Command: Supported 00:09:52.235 Write Uncorrectable Command: Not Supported 00:09:52.235 Dataset Management Command: Supported 00:09:52.235 Write Zeroes Command: Supported 00:09:52.235 Set Features Save Field: Supported 00:09:52.235 Reservations: Not Supported 00:09:52.235 Timestamp: Supported 00:09:52.235 Copy: Supported 00:09:52.235 Volatile Write Cache: Present 00:09:52.235 Atomic Write Unit (Normal): 1 00:09:52.235 Atomic Write Unit (PFail): 1 00:09:52.235 Atomic Compare & Write Unit: 1 00:09:52.235 Fused Compare & Write: Not Supported 00:09:52.235 Scatter-Gather List 00:09:52.235 SGL Command Set: Supported 00:09:52.235 SGL Keyed: Not Supported 00:09:52.235 SGL Bit Bucket Descriptor: Not Supported 00:09:52.235 SGL Metadata Pointer: Not Supported 00:09:52.235 Oversized SGL: Not Supported 00:09:52.235 SGL Metadata Address: Not Supported 00:09:52.235 SGL Offset: Not Supported 00:09:52.235 Transport SGL Data Block: Not Supported 00:09:52.235 Replay Protected Memory Block: Not Supported 00:09:52.235 00:09:52.235 Firmware Slot Information 00:09:52.235 ========================= 00:09:52.235 Active slot: 1 00:09:52.235 Slot 1 Firmware Revision: 1.0 00:09:52.235 00:09:52.235 00:09:52.235 Commands Supported and Effects 00:09:52.235 ============================== 00:09:52.235 Admin Commands 00:09:52.235 -------------- 00:09:52.235 Delete I/O Submission Queue (00h): Supported 00:09:52.235 Create I/O Submission Queue (01h): Supported 00:09:52.235 Get Log Page (02h): Supported 00:09:52.235 Delete I/O Completion Queue (04h): Supported 00:09:52.235 Create I/O Completion Queue (05h): Supported 00:09:52.235 Identify (06h): Supported 00:09:52.235 Abort (08h): Supported 00:09:52.235 Set Features (09h): Supported 00:09:52.235 Get Features (0Ah): Supported 00:09:52.235 Asynchronous Event Request (0Ch): Supported 00:09:52.235 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:52.235 Directive Send (19h): Supported 00:09:52.235 Directive Receive (1Ah): Supported 00:09:52.235 Virtualization Management (1Ch): Supported 00:09:52.235 Doorbell Buffer Config (7Ch): Supported 00:09:52.235 Format NVM (80h): Supported LBA-Change 00:09:52.235 I/O Commands 00:09:52.235 ------------ 00:09:52.235 Flush (00h): Supported LBA-Change 00:09:52.235 Write (01h): 
Supported LBA-Change 00:09:52.235 Read (02h): Supported 00:09:52.235 Compare (05h): Supported 00:09:52.235 Write Zeroes (08h): Supported LBA-Change 00:09:52.235 Dataset Management (09h): Supported LBA-Change 00:09:52.235 Unknown (0Ch): Supported 00:09:52.235 [2024-11-26 17:56:08.969584] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 75267 terminated unexpected 00:09:52.235 Unknown (12h): Supported 00:09:52.235 Copy (19h): Supported LBA-Change 00:09:52.235 Unknown (1Dh): Supported LBA-Change 00:09:52.235 00:09:52.235 Error Log 00:09:52.235 ========= 00:09:52.235 00:09:52.235 Arbitration 00:09:52.235 =========== 00:09:52.235 Arbitration Burst: no limit 00:09:52.235 00:09:52.235 Power Management 00:09:52.235 ================ 00:09:52.235 Number of Power States: 1 00:09:52.235 Current Power State: Power State #0 00:09:52.235 Power State #0: 00:09:52.235 Max Power: 25.00 W 00:09:52.235 Non-Operational State: Operational 00:09:52.235 Entry Latency: 16 microseconds 00:09:52.235 Exit Latency: 4 microseconds 00:09:52.235 Relative Read Throughput: 0 00:09:52.235 Relative Read Latency: 0 00:09:52.235 Relative Write Throughput: 0 00:09:52.235 Relative Write Latency: 0 00:09:52.235 Idle Power: Not Reported 00:09:52.235 Active Power: Not Reported 00:09:52.235 Non-Operational Permissive Mode: Not Supported 00:09:52.235 00:09:52.235 Health Information 00:09:52.235 ================== 00:09:52.235 Critical Warnings: 00:09:52.235 Available Spare Space: OK 00:09:52.235 Temperature: OK 00:09:52.235 Device Reliability: OK 00:09:52.235 Read Only: No 00:09:52.235 Volatile Memory Backup: OK 00:09:52.235 Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.235 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:52.235 Available Spare: 0% 00:09:52.235 Available Spare Threshold: 0% 00:09:52.235 Life Percentage Used: 0% 00:09:52.235 Data Units Read: 2062 00:09:52.235 Data Units Written: 946 00:09:52.235 Host Read Commands: 89957 00:09:52.235 Host Write Commands: 44618 00:09:52.235 Controller Busy Time: 0 minutes 00:09:52.235 Power Cycles: 0 00:09:52.235 Power On Hours: 0 hours 00:09:52.235 Unsafe Shutdowns: 0 00:09:52.235 Unrecoverable Media Errors: 0 00:09:52.235 Lifetime Error Log Entries: 0 00:09:52.235 Warning Temperature Time: 0 minutes 00:09:52.235 Critical Temperature Time: 0 minutes 00:09:52.235 00:09:52.235 Number of Queues 00:09:52.235 ================ 00:09:52.235 Number of I/O Submission Queues: 64 00:09:52.235 Number of I/O Completion Queues: 64 00:09:52.235 00:09:52.235 ZNS Specific Controller Data 00:09:52.235 ============================ 00:09:52.235 Zone Append Size Limit: 0 00:09:52.235 00:09:52.235 00:09:52.235 Active Namespaces 00:09:52.235 ================= 00:09:52.235 Namespace ID:1 00:09:52.235 Error Recovery Timeout: Unlimited 00:09:52.235 Command Set Identifier: NVM (00h) 00:09:52.235 Deallocate: Supported 00:09:52.235 Deallocated/Unwritten Error: Supported 00:09:52.235 Deallocated Read Value: All 0x00 00:09:52.235 Deallocate in Write Zeroes: Not Supported 00:09:52.235 Deallocated Guard Field: 0xFFFF 00:09:52.235 Flush: Supported 00:09:52.235 Reservation: Not Supported 00:09:52.235 Metadata Transferred as: Separate Metadata Buffer 00:09:52.235 Namespace Sharing Capabilities: Private 00:09:52.235 Size (in LBAs): 1548666 (5GiB) 00:09:52.235 Capacity (in LBAs): 1548666 (5GiB) 00:09:52.235 Utilization (in LBAs): 1548666 (5GiB) 00:09:52.235 Thin Provisioning: Not Supported 00:09:52.235 Per-NS Atomic Units: No 00:09:52.235 Maximum Single Source Range Length: 
128 00:09:52.235 Maximum Copy Length: 128 00:09:52.236 Maximum Source Range Count: 128 00:09:52.236 NGUID/EUI64 Never Reused: No 00:09:52.236 Namespace Write Protected: No 00:09:52.236 Number of LBA Formats: 8 00:09:52.236 Current LBA Format: LBA Format #07 00:09:52.236 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.236 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.236 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.236 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.236 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.236 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.236 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.236 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.236 00:09:52.236 ===================================================== 00:09:52.236 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:52.236 ===================================================== 00:09:52.236 Controller Capabilities/Features 00:09:52.236 ================================ 00:09:52.236 Vendor ID: 1b36 00:09:52.236 Subsystem Vendor ID: 1af4 00:09:52.236 Serial Number: 12341 00:09:52.236 Model Number: QEMU NVMe Ctrl 00:09:52.236 Firmware Version: 8.0.0 00:09:52.236 Recommended Arb Burst: 6 00:09:52.236 IEEE OUI Identifier: 00 54 52 00:09:52.236 Multi-path I/O 00:09:52.236 May have multiple subsystem ports: No 00:09:52.236 May have multiple controllers: No 00:09:52.236 Associated with SR-IOV VF: No 00:09:52.236 Max Data Transfer Size: 524288 00:09:52.236 Max Number of Namespaces: 256 00:09:52.236 Max Number of I/O Queues: 64 00:09:52.236 NVMe Specification Version (VS): 1.4 00:09:52.236 NVMe Specification Version (Identify): 1.4 00:09:52.236 Maximum Queue Entries: 2048 00:09:52.236 Contiguous Queues Required: Yes 00:09:52.236 Arbitration Mechanisms Supported 00:09:52.236 Weighted Round Robin: Not Supported 00:09:52.236 Vendor Specific: Not Supported 00:09:52.236 Reset Timeout: 7500 ms 00:09:52.236 Doorbell Stride: 4 bytes 00:09:52.236 NVM Subsystem Reset: Not Supported 00:09:52.236 Command Sets Supported 00:09:52.236 NVM Command Set: Supported 00:09:52.236 Boot Partition: Not Supported 00:09:52.236 Memory Page Size Minimum: 4096 bytes 00:09:52.236 Memory Page Size Maximum: 65536 bytes 00:09:52.236 Persistent Memory Region: Not Supported 00:09:52.236 Optional Asynchronous Events Supported 00:09:52.236 Namespace Attribute Notices: Supported 00:09:52.236 Firmware Activation Notices: Not Supported 00:09:52.236 ANA Change Notices: Not Supported 00:09:52.236 PLE Aggregate Log Change Notices: Not Supported 00:09:52.236 LBA Status Info Alert Notices: Not Supported 00:09:52.236 EGE Aggregate Log Change Notices: Not Supported 00:09:52.236 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.236 Zone Descriptor Change Notices: Not Supported 00:09:52.236 Discovery Log Change Notices: Not Supported 00:09:52.236 Controller Attributes 00:09:52.236 128-bit Host Identifier: Not Supported 00:09:52.236 Non-Operational Permissive Mode: Not Supported 00:09:52.236 NVM Sets: Not Supported 00:09:52.236 Read Recovery Levels: Not Supported 00:09:52.236 Endurance Groups: Not Supported 00:09:52.236 Predictable Latency Mode: Not Supported 00:09:52.236 Traffic Based Keep ALive: Not Supported 00:09:52.236 Namespace Granularity: Not Supported 00:09:52.236 SQ Associations: Not Supported 00:09:52.236 UUID List: Not Supported 00:09:52.236 Multi-Domain Subsystem: Not Supported 00:09:52.236 Fixed Capacity Management: Not Supported 00:09:52.236 Variable Capacity 
Management: Not Supported 00:09:52.236 Delete Endurance Group: Not Supported 00:09:52.236 Delete NVM Set: Not Supported 00:09:52.236 Extended LBA Formats Supported: Supported 00:09:52.236 Flexible Data Placement Supported: Not Supported 00:09:52.236 00:09:52.236 Controller Memory Buffer Support 00:09:52.236 ================================ 00:09:52.236 Supported: No 00:09:52.236 00:09:52.236 Persistent Memory Region Support 00:09:52.236 ================================ 00:09:52.236 Supported: No 00:09:52.236 00:09:52.236 Admin Command Set Attributes 00:09:52.236 ============================ 00:09:52.236 Security Send/Receive: Not Supported 00:09:52.236 Format NVM: Supported 00:09:52.236 Firmware Activate/Download: Not Supported 00:09:52.236 Namespace Management: Supported 00:09:52.236 Device Self-Test: Not Supported 00:09:52.236 Directives: Supported 00:09:52.236 NVMe-MI: Not Supported 00:09:52.236 Virtualization Management: Not Supported 00:09:52.236 Doorbell Buffer Config: Supported 00:09:52.236 Get LBA Status Capability: Not Supported 00:09:52.236 Command & Feature Lockdown Capability: Not Supported 00:09:52.236 Abort Command Limit: 4 00:09:52.236 Async Event Request Limit: 4 00:09:52.236 Number of Firmware Slots: N/A 00:09:52.236 Firmware Slot 1 Read-Only: N/A 00:09:52.236 Firmware Activation Without Reset: N/A 00:09:52.236 Multiple Update Detection Support: N/A 00:09:52.236 Firmware Update Granularity: No Information Provided 00:09:52.236 Per-Namespace SMART Log: Yes 00:09:52.236 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.236 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:52.236 Command Effects Log Page: Supported 00:09:52.236 Get Log Page Extended Data: Supported 00:09:52.236 Telemetry Log Pages: Not Supported 00:09:52.236 Persistent Event Log Pages: Not Supported 00:09:52.236 Supported Log Pages Log Page: May Support 00:09:52.236 Commands Supported & Effects Log Page: Not Supported 00:09:52.236 Feature Identifiers & Effects Log Page:May Support 00:09:52.236 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.236 Data Area 4 for Telemetry Log: Not Supported 00:09:52.236 Error Log Page Entries Supported: 1 00:09:52.236 Keep Alive: Not Supported 00:09:52.236 00:09:52.236 NVM Command Set Attributes 00:09:52.236 ========================== 00:09:52.236 Submission Queue Entry Size 00:09:52.236 Max: 64 00:09:52.236 Min: 64 00:09:52.236 Completion Queue Entry Size 00:09:52.236 Max: 16 00:09:52.236 Min: 16 00:09:52.236 Number of Namespaces: 256 00:09:52.236 Compare Command: Supported 00:09:52.236 Write Uncorrectable Command: Not Supported 00:09:52.236 Dataset Management Command: Supported 00:09:52.236 Write Zeroes Command: Supported 00:09:52.236 Set Features Save Field: Supported 00:09:52.236 Reservations: Not Supported 00:09:52.236 Timestamp: Supported 00:09:52.237 Copy: Supported 00:09:52.237 Volatile Write Cache: Present 00:09:52.237 Atomic Write Unit (Normal): 1 00:09:52.237 Atomic Write Unit (PFail): 1 00:09:52.237 Atomic Compare & Write Unit: 1 00:09:52.237 Fused Compare & Write: Not Supported 00:09:52.237 Scatter-Gather List 00:09:52.237 SGL Command Set: Supported 00:09:52.237 SGL Keyed: Not Supported 00:09:52.237 SGL Bit Bucket Descriptor: Not Supported 00:09:52.237 SGL Metadata Pointer: Not Supported 00:09:52.237 Oversized SGL: Not Supported 00:09:52.237 SGL Metadata Address: Not Supported 00:09:52.237 SGL Offset: Not Supported 00:09:52.237 Transport SGL Data Block: Not Supported 00:09:52.237 Replay Protected Memory Block: Not Supported 00:09:52.237 
00:09:52.237 Firmware Slot Information 00:09:52.237 ========================= 00:09:52.237 Active slot: 1 00:09:52.237 Slot 1 Firmware Revision: 1.0 00:09:52.237 00:09:52.237 00:09:52.237 Commands Supported and Effects 00:09:52.237 ============================== 00:09:52.237 Admin Commands 00:09:52.237 -------------- 00:09:52.237 Delete I/O Submission Queue (00h): Supported 00:09:52.237 Create I/O Submission Queue (01h): Supported 00:09:52.237 Get Log Page (02h): Supported 00:09:52.237 Delete I/O Completion Queue (04h): Supported 00:09:52.237 Create I/O Completion Queue (05h): Supported 00:09:52.237 Identify (06h): Supported 00:09:52.237 Abort (08h): Supported 00:09:52.237 Set Features (09h): Supported 00:09:52.237 Get Features (0Ah): Supported 00:09:52.237 Asynchronous Event Request (0Ch): Supported 00:09:52.237 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:52.237 Directive Send (19h): Supported 00:09:52.237 Directive Receive (1Ah): Supported 00:09:52.237 Virtualization Management (1Ch): Supported 00:09:52.237 Doorbell Buffer Config (7Ch): Supported 00:09:52.237 Format NVM (80h): Supported LBA-Change 00:09:52.237 I/O Commands 00:09:52.237 ------------ 00:09:52.237 Flush (00h): Supported LBA-Change 00:09:52.237 Write (01h): Supported LBA-Change 00:09:52.237 Read (02h): Supported 00:09:52.237 Compare (05h): Supported 00:09:52.237 Write Zeroes (08h): Supported LBA-Change 00:09:52.237 Dataset Management (09h): Supported LBA-Change 00:09:52.237 Unknown (0Ch): Supported 00:09:52.237 Unknown (12h): Supported 00:09:52.237 Copy (19h): Supported LBA-Change 00:09:52.237 Unknown (1Dh): Supported LBA-Change 00:09:52.237 00:09:52.237 Error Log 00:09:52.237 ========= 00:09:52.237 00:09:52.237 Arbitration 00:09:52.237 =========== 00:09:52.237 Arbitration Burst: no limit 00:09:52.237 00:09:52.237 Power Management 00:09:52.237 ================ 00:09:52.237 Number of Power States: 1 00:09:52.237 Current Power State: Power State #0 00:09:52.237 Power State #0: 00:09:52.237 Max Power: 25.00 W 00:09:52.237 Non-Operational State: Operational 00:09:52.237 Entry Latency: 16 microseconds 00:09:52.237 Exit Latency: 4 microseconds 00:09:52.237 Relative Read Throughput: 0 00:09:52.237 Relative Read Latency: 0 00:09:52.237 Relative Write Throughput: 0 00:09:52.237 Relative Write Latency: 0 00:09:52.237 Idle Power: Not Reported 00:09:52.237 Active Power: Not Reported 00:09:52.237 Non-Operational Permissive Mode: Not Supported 00:09:52.237 00:09:52.237 Health Information 00:09:52.237 ================== 00:09:52.237 Critical Warnings: 00:09:52.237 Available Spare Space: OK 00:09:52.237 Temperature: OK 00:09:52.237 Device Reliability: OK 00:09:52.237 Read Only: No 00:09:52.237 Volatile Memory Backup: OK 00:09:52.237 Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.237 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:52.237 Available Spare: 0% 00:09:52.237 Available Spare Threshold: 0% 00:09:52.237 Life Percentage Used: 0% 00:09:52.237 Data Units Read: 1390 00:09:52.237 Data Units Written: [2024-11-26 17:56:08.970666] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 75267 terminated unexpected 00:09:52.237 645 00:09:52.237 Host Read Commands: 59397 00:09:52.237 Host Write Commands: 29200 00:09:52.237 Controller Busy Time: 0 minutes 00:09:52.237 Power Cycles: 0 00:09:52.237 Power On Hours: 0 hours 00:09:52.237 Unsafe Shutdowns: 0 00:09:52.237 Unrecoverable Media Errors: 0 00:09:52.237 Lifetime Error Log Entries: 0 00:09:52.237 Warning Temperature Time: 0 minutes 
00:09:52.237 Critical Temperature Time: 0 minutes 00:09:52.237 00:09:52.237 Number of Queues 00:09:52.237 ================ 00:09:52.237 Number of I/O Submission Queues: 64 00:09:52.237 Number of I/O Completion Queues: 64 00:09:52.237 00:09:52.237 ZNS Specific Controller Data 00:09:52.237 ============================ 00:09:52.237 Zone Append Size Limit: 0 00:09:52.237 00:09:52.237 00:09:52.237 Active Namespaces 00:09:52.237 ================= 00:09:52.237 Namespace ID:1 00:09:52.237 Error Recovery Timeout: Unlimited 00:09:52.237 Command Set Identifier: NVM (00h) 00:09:52.237 Deallocate: Supported 00:09:52.237 Deallocated/Unwritten Error: Supported 00:09:52.237 Deallocated Read Value: All 0x00 00:09:52.237 Deallocate in Write Zeroes: Not Supported 00:09:52.237 Deallocated Guard Field: 0xFFFF 00:09:52.237 Flush: Supported 00:09:52.237 Reservation: Not Supported 00:09:52.237 Namespace Sharing Capabilities: Private 00:09:52.237 Size (in LBAs): 1310720 (5GiB) 00:09:52.237 Capacity (in LBAs): 1310720 (5GiB) 00:09:52.237 Utilization (in LBAs): 1310720 (5GiB) 00:09:52.237 Thin Provisioning: Not Supported 00:09:52.237 Per-NS Atomic Units: No 00:09:52.237 Maximum Single Source Range Length: 128 00:09:52.237 Maximum Copy Length: 128 00:09:52.237 Maximum Source Range Count: 128 00:09:52.237 NGUID/EUI64 Never Reused: No 00:09:52.237 Namespace Write Protected: No 00:09:52.237 Number of LBA Formats: 8 00:09:52.237 Current LBA Format: LBA Format #04 00:09:52.237 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.237 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.237 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.237 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.237 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.237 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.237 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.237 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.237 00:09:52.237 ===================================================== 00:09:52.237 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:52.237 ===================================================== 00:09:52.237 Controller Capabilities/Features 00:09:52.237 ================================ 00:09:52.237 Vendor ID: 1b36 00:09:52.237 Subsystem Vendor ID: 1af4 00:09:52.237 Serial Number: 12342 00:09:52.238 Model Number: QEMU NVMe Ctrl 00:09:52.238 Firmware Version: 8.0.0 00:09:52.238 Recommended Arb Burst: 6 00:09:52.238 IEEE OUI Identifier: 00 54 52 00:09:52.238 Multi-path I/O 00:09:52.238 May have multiple subsystem ports: No 00:09:52.238 May have multiple controllers: No 00:09:52.238 Associated with SR-IOV VF: No 00:09:52.238 Max Data Transfer Size: 524288 00:09:52.238 Max Number of Namespaces: 256 00:09:52.238 Max Number of I/O Queues: 64 00:09:52.238 NVMe Specification Version (VS): 1.4 00:09:52.238 NVMe Specification Version (Identify): 1.4 00:09:52.238 Maximum Queue Entries: 2048 00:09:52.238 Contiguous Queues Required: Yes 00:09:52.238 Arbitration Mechanisms Supported 00:09:52.238 Weighted Round Robin: Not Supported 00:09:52.238 Vendor Specific: Not Supported 00:09:52.238 Reset Timeout: 7500 ms 00:09:52.238 Doorbell Stride: 4 bytes 00:09:52.238 NVM Subsystem Reset: Not Supported 00:09:52.238 Command Sets Supported 00:09:52.238 NVM Command Set: Supported 00:09:52.238 Boot Partition: Not Supported 00:09:52.238 Memory Page Size Minimum: 4096 bytes 00:09:52.238 Memory Page Size Maximum: 65536 bytes 00:09:52.238 Persistent Memory Region: Not Supported 00:09:52.238 
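The Active Namespaces blocks in these dumps report sizes both as LBA counts and as capacities: with the current LBA format's 4096-byte data size, 1310720 LBAs x 4096 B = 5 GiB and 1048576 LBAs x 4096 B = 4 GiB, matching the "(5GiB)" and "(4GiB)" annotations. A short sketch of recomputing those lines for an attached controller, using SPDK's namespace accessors (again an assumed illustration, not code from this test run):

```c
/* Sketch: walk the active namespaces of an attached controller and
 * recompute the "Size (in LBAs)" lines shown in the dumps above. */
#include <stdio.h>
#include <inttypes.h>
#include "spdk/nvme.h"

static void print_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
		uint64_t sectors = spdk_nvme_ns_get_num_sectors(ns);
		uint32_t ss = spdk_nvme_ns_get_sector_size(ns); /* data bytes per LBA */
		uint32_t ms = spdk_nvme_ns_get_md_size(ns);     /* metadata bytes per LBA */

		/* e.g. 1310720 LBAs x 4096 B = 5.0 GiB for namespace 1 of 12341 */
		printf("ns %u: %" PRIu64 " LBAs x %u B (+%u B md) = %.1f GiB\n",
		       nsid, sectors, ss, ms,
		       (double)sectors * ss / (1024.0 * 1024.0 * 1024.0));
	}
}
```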
Optional Asynchronous Events Supported 00:09:52.238 Namespace Attribute Notices: Supported 00:09:52.238 Firmware Activation Notices: Not Supported 00:09:52.238 ANA Change Notices: Not Supported 00:09:52.238 PLE Aggregate Log Change Notices: Not Supported 00:09:52.238 LBA Status Info Alert Notices: Not Supported 00:09:52.238 EGE Aggregate Log Change Notices: Not Supported 00:09:52.238 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.238 Zone Descriptor Change Notices: Not Supported 00:09:52.238 Discovery Log Change Notices: Not Supported 00:09:52.238 Controller Attributes 00:09:52.238 128-bit Host Identifier: Not Supported 00:09:52.238 Non-Operational Permissive Mode: Not Supported 00:09:52.238 NVM Sets: Not Supported 00:09:52.238 Read Recovery Levels: Not Supported 00:09:52.238 Endurance Groups: Not Supported 00:09:52.238 Predictable Latency Mode: Not Supported 00:09:52.238 Traffic Based Keep ALive: Not Supported 00:09:52.238 Namespace Granularity: Not Supported 00:09:52.238 SQ Associations: Not Supported 00:09:52.238 UUID List: Not Supported 00:09:52.238 Multi-Domain Subsystem: Not Supported 00:09:52.238 Fixed Capacity Management: Not Supported 00:09:52.238 Variable Capacity Management: Not Supported 00:09:52.238 Delete Endurance Group: Not Supported 00:09:52.238 Delete NVM Set: Not Supported 00:09:52.238 Extended LBA Formats Supported: Supported 00:09:52.238 Flexible Data Placement Supported: Not Supported 00:09:52.238 00:09:52.238 Controller Memory Buffer Support 00:09:52.238 ================================ 00:09:52.238 Supported: No 00:09:52.238 00:09:52.238 Persistent Memory Region Support 00:09:52.238 ================================ 00:09:52.238 Supported: No 00:09:52.238 00:09:52.238 Admin Command Set Attributes 00:09:52.238 ============================ 00:09:52.238 Security Send/Receive: Not Supported 00:09:52.238 Format NVM: Supported 00:09:52.238 Firmware Activate/Download: Not Supported 00:09:52.238 Namespace Management: Supported 00:09:52.238 Device Self-Test: Not Supported 00:09:52.238 Directives: Supported 00:09:52.238 NVMe-MI: Not Supported 00:09:52.238 Virtualization Management: Not Supported 00:09:52.238 Doorbell Buffer Config: Supported 00:09:52.238 Get LBA Status Capability: Not Supported 00:09:52.238 Command & Feature Lockdown Capability: Not Supported 00:09:52.238 Abort Command Limit: 4 00:09:52.238 Async Event Request Limit: 4 00:09:52.238 Number of Firmware Slots: N/A 00:09:52.238 Firmware Slot 1 Read-Only: N/A 00:09:52.238 Firmware Activation Without Reset: N/A 00:09:52.238 Multiple Update Detection Support: N/A 00:09:52.238 Firmware Update Granularity: No Information Provided 00:09:52.238 Per-Namespace SMART Log: Yes 00:09:52.238 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.238 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:52.238 Command Effects Log Page: Supported 00:09:52.238 Get Log Page Extended Data: Supported 00:09:52.238 Telemetry Log Pages: Not Supported 00:09:52.238 Persistent Event Log Pages: Not Supported 00:09:52.238 Supported Log Pages Log Page: May Support 00:09:52.238 Commands Supported & Effects Log Page: Not Supported 00:09:52.238 Feature Identifiers & Effects Log Page:May Support 00:09:52.238 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.238 Data Area 4 for Telemetry Log: Not Supported 00:09:52.238 Error Log Page Entries Supported: 1 00:09:52.238 Keep Alive: Not Supported 00:09:52.238 00:09:52.238 NVM Command Set Attributes 00:09:52.238 ========================== 00:09:52.238 Submission Queue Entry Size 
00:09:52.238 Max: 64 00:09:52.238 Min: 64 00:09:52.238 Completion Queue Entry Size 00:09:52.238 Max: 16 00:09:52.238 Min: 16 00:09:52.238 Number of Namespaces: 256 00:09:52.238 Compare Command: Supported 00:09:52.238 Write Uncorrectable Command: Not Supported 00:09:52.238 Dataset Management Command: Supported 00:09:52.238 Write Zeroes Command: Supported 00:09:52.238 Set Features Save Field: Supported 00:09:52.238 Reservations: Not Supported 00:09:52.238 Timestamp: Supported 00:09:52.238 Copy: Supported 00:09:52.238 Volatile Write Cache: Present 00:09:52.238 Atomic Write Unit (Normal): 1 00:09:52.238 Atomic Write Unit (PFail): 1 00:09:52.238 Atomic Compare & Write Unit: 1 00:09:52.238 Fused Compare & Write: Not Supported 00:09:52.238 Scatter-Gather List 00:09:52.238 SGL Command Set: Supported 00:09:52.238 SGL Keyed: Not Supported 00:09:52.238 SGL Bit Bucket Descriptor: Not Supported 00:09:52.238 SGL Metadata Pointer: Not Supported 00:09:52.238 Oversized SGL: Not Supported 00:09:52.238 SGL Metadata Address: Not Supported 00:09:52.238 SGL Offset: Not Supported 00:09:52.238 Transport SGL Data Block: Not Supported 00:09:52.238 Replay Protected Memory Block: Not Supported 00:09:52.238 00:09:52.238 Firmware Slot Information 00:09:52.238 ========================= 00:09:52.239 Active slot: 1 00:09:52.239 Slot 1 Firmware Revision: 1.0 00:09:52.239 00:09:52.239 00:09:52.239 Commands Supported and Effects 00:09:52.239 ============================== 00:09:52.239 Admin Commands 00:09:52.239 -------------- 00:09:52.239 Delete I/O Submission Queue (00h): Supported 00:09:52.239 Create I/O Submission Queue (01h): Supported 00:09:52.239 Get Log Page (02h): Supported 00:09:52.239 Delete I/O Completion Queue (04h): Supported 00:09:52.239 Create I/O Completion Queue (05h): Supported 00:09:52.239 Identify (06h): Supported 00:09:52.239 Abort (08h): Supported 00:09:52.239 Set Features (09h): Supported 00:09:52.239 Get Features (0Ah): Supported 00:09:52.239 Asynchronous Event Request (0Ch): Supported 00:09:52.239 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:52.239 Directive Send (19h): Supported 00:09:52.239 Directive Receive (1Ah): Supported 00:09:52.239 Virtualization Management (1Ch): Supported 00:09:52.239 Doorbell Buffer Config (7Ch): Supported 00:09:52.239 Format NVM (80h): Supported LBA-Change 00:09:52.239 I/O Commands 00:09:52.239 ------------ 00:09:52.239 Flush (00h): Supported LBA-Change 00:09:52.239 Write (01h): Supported LBA-Change 00:09:52.239 Read (02h): Supported 00:09:52.239 Compare (05h): Supported 00:09:52.239 Write Zeroes (08h): Supported LBA-Change 00:09:52.239 Dataset Management (09h): Supported LBA-Change 00:09:52.239 Unknown (0Ch): Supported 00:09:52.239 Unknown (12h): Supported 00:09:52.239 Copy (19h): Supported LBA-Change 00:09:52.239 Unknown (1Dh): Supported LBA-Change 00:09:52.239 00:09:52.239 Error Log 00:09:52.239 ========= 00:09:52.239 00:09:52.239 Arbitration 00:09:52.239 =========== 00:09:52.239 Arbitration Burst: no limit 00:09:52.239 00:09:52.239 Power Management 00:09:52.239 ================ 00:09:52.239 Number of Power States: 1 00:09:52.239 Current Power State: Power State #0 00:09:52.239 Power State #0: 00:09:52.239 Max Power: 25.00 W 00:09:52.239 Non-Operational State: Operational 00:09:52.239 Entry Latency: 16 microseconds 00:09:52.239 Exit Latency: 4 microseconds 00:09:52.239 Relative Read Throughput: 0 00:09:52.239 Relative Read Latency: 0 00:09:52.239 Relative Write Throughput: 0 00:09:52.239 Relative Write Latency: 0 00:09:52.239 Idle Power: Not 
Reported 00:09:52.239 Active Power: Not Reported 00:09:52.239 Non-Operational Permissive Mode: Not Supported 00:09:52.239 00:09:52.239 Health Information 00:09:52.239 ================== 00:09:52.239 Critical Warnings: 00:09:52.239 Available Spare Space: OK 00:09:52.239 Temperature: OK 00:09:52.239 Device Reliability: OK 00:09:52.239 Read Only: No 00:09:52.239 Volatile Memory Backup: OK 00:09:52.239 Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.239 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:52.239 Available Spare: 0% 00:09:52.239 Available Spare Threshold: 0% 00:09:52.239 Life Percentage Used: 0% 00:09:52.239 Data Units Read: 4288 00:09:52.239 Data Units Written: 1984 00:09:52.239 Host Read Commands: 179976 00:09:52.239 Host Write Commands: 88205 00:09:52.239 Controller Busy Time: 0 minutes 00:09:52.239 Power Cycles: 0 00:09:52.239 Power On Hours: 0 hours 00:09:52.239 Unsafe Shutdowns: 0 00:09:52.239 Unrecoverable Media Errors: 0 00:09:52.239 Lifetime Error Log Entries: 0 00:09:52.239 Warning Temperature Time: 0 minutes 00:09:52.239 Critical Temperature Time: 0 minutes 00:09:52.239 00:09:52.239 Number of Queues 00:09:52.239 ================ 00:09:52.239 Number of I/O Submission Queues: 64 00:09:52.239 Number of I/O Completion Queues: 64 00:09:52.239 00:09:52.239 ZNS Specific Controller Data 00:09:52.239 ============================ 00:09:52.239 Zone Append Size Limit: 0 00:09:52.239 00:09:52.239 00:09:52.239 Active Namespaces 00:09:52.239 ================= 00:09:52.239 Namespace ID:1 00:09:52.239 Error Recovery Timeout: Unlimited 00:09:52.239 Command Set Identifier: NVM (00h) 00:09:52.239 Deallocate: Supported 00:09:52.239 Deallocated/Unwritten Error: Supported 00:09:52.239 Deallocated Read Value: All 0x00 00:09:52.239 Deallocate in Write Zeroes: Not Supported 00:09:52.239 Deallocated Guard Field: 0xFFFF 00:09:52.239 Flush: Supported 00:09:52.239 Reservation: Not Supported 00:09:52.239 Namespace Sharing Capabilities: Private 00:09:52.239 Size (in LBAs): 1048576 (4GiB) 00:09:52.239 Capacity (in LBAs): 1048576 (4GiB) 00:09:52.239 Utilization (in LBAs): 1048576 (4GiB) 00:09:52.239 Thin Provisioning: Not Supported 00:09:52.239 Per-NS Atomic Units: No 00:09:52.239 Maximum Single Source Range Length: 128 00:09:52.239 Maximum Copy Length: 128 00:09:52.239 Maximum Source Range Count: 128 00:09:52.239 NGUID/EUI64 Never Reused: No 00:09:52.239 Namespace Write Protected: No 00:09:52.239 Number of LBA Formats: 8 00:09:52.239 Current LBA Format: LBA Format #04 00:09:52.239 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.239 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.239 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.239 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.239 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.239 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.239 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.239 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.239 00:09:52.239 Namespace ID:2 00:09:52.239 Error Recovery Timeout: Unlimited 00:09:52.239 Command Set Identifier: NVM (00h) 00:09:52.239 Deallocate: Supported 00:09:52.239 Deallocated/Unwritten Error: Supported 00:09:52.239 Deallocated Read Value: All 0x00 00:09:52.239 Deallocate in Write Zeroes: Not Supported 00:09:52.239 Deallocated Guard Field: 0xFFFF 00:09:52.239 Flush: Supported 00:09:52.239 Reservation: Not Supported 00:09:52.239 Namespace Sharing Capabilities: Private 00:09:52.239 Size (in LBAs): 1048576 (4GiB) 00:09:52.239 
Capacity (in LBAs): 1048576 (4GiB) 00:09:52.239 Utilization (in LBAs): 1048576 (4GiB) 00:09:52.239 Thin Provisioning: Not Supported 00:09:52.239 Per-NS Atomic Units: No 00:09:52.239 Maximum Single Source Range Length: 128 00:09:52.239 Maximum Copy Length: 128 00:09:52.239 Maximum Source Range Count: 128 00:09:52.239 NGUID/EUI64 Never Reused: No 00:09:52.239 Namespace Write Protected: No 00:09:52.239 Number of LBA Formats: 8 00:09:52.239 Current LBA Format: LBA Format #04 00:09:52.239 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.239 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.239 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.239 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.239 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.239 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.240 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.240 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.240 00:09:52.240 Namespace ID:3 00:09:52.240 Error Recovery Timeout: Unlimited 00:09:52.240 Command Set Identifier: NVM (00h) 00:09:52.240 Deallocate: Supported 00:09:52.240 Deallocated/Unwritten Error: Supported 00:09:52.240 Deallocated Read Value: All 0x00 00:09:52.240 Deallocate in Write Zeroes: Not Supported 00:09:52.240 Deallocated Guard Field: 0xFFFF 00:09:52.240 Flush: Supported 00:09:52.240 Reservation: Not Supported 00:09:52.240 Namespace Sharing Capabilities: Private 00:09:52.240 Size (in LBAs): 1048576 (4GiB) 00:09:52.240 Capacity (in LBAs): 1048576 (4GiB) 00:09:52.240 Utilization (in LBAs): 1048576 (4GiB) 00:09:52.240 Thin Provisioning: Not Supported 00:09:52.240 Per-NS Atomic Units: No 00:09:52.240 Maximum Single Source Range Length: 128 00:09:52.240 Maximum Copy Length: 128 00:09:52.240 Maximum Source Range Count: 128 00:09:52.240 NGUID/EUI64 Never Reused: No 00:09:52.240 Namespace Write Protected: No 00:09:52.240 Number of LBA Formats: 8 00:09:52.240 Current LBA Format: LBA Format #04 00:09:52.240 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.240 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.240 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.240 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.240 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.240 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.240 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.240 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.240 00:09:52.240 17:56:09 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:52.240 17:56:09 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:09:52.499 ===================================================== 00:09:52.499 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:52.499 ===================================================== 00:09:52.499 Controller Capabilities/Features 00:09:52.499 ================================ 00:09:52.499 Vendor ID: 1b36 00:09:52.499 Subsystem Vendor ID: 1af4 00:09:52.499 Serial Number: 12340 00:09:52.499 Model Number: QEMU NVMe Ctrl 00:09:52.499 Firmware Version: 8.0.0 00:09:52.499 Recommended Arb Burst: 6 00:09:52.499 IEEE OUI Identifier: 00 54 52 00:09:52.499 Multi-path I/O 00:09:52.499 May have multiple subsystem ports: No 00:09:52.499 May have multiple controllers: No 00:09:52.499 Associated with SR-IOV VF: No 00:09:52.499 Max Data Transfer Size: 524288 00:09:52.499 Max Number of Namespaces: 256 00:09:52.499 Max Number of I/O 
Queues: 64 00:09:52.499 NVMe Specification Version (VS): 1.4 00:09:52.499 NVMe Specification Version (Identify): 1.4 00:09:52.499 Maximum Queue Entries: 2048 00:09:52.499 Contiguous Queues Required: Yes 00:09:52.499 Arbitration Mechanisms Supported 00:09:52.499 Weighted Round Robin: Not Supported 00:09:52.499 Vendor Specific: Not Supported 00:09:52.499 Reset Timeout: 7500 ms 00:09:52.499 Doorbell Stride: 4 bytes 00:09:52.499 NVM Subsystem Reset: Not Supported 00:09:52.499 Command Sets Supported 00:09:52.499 NVM Command Set: Supported 00:09:52.499 Boot Partition: Not Supported 00:09:52.499 Memory Page Size Minimum: 4096 bytes 00:09:52.499 Memory Page Size Maximum: 65536 bytes 00:09:52.499 Persistent Memory Region: Not Supported 00:09:52.499 Optional Asynchronous Events Supported 00:09:52.499 Namespace Attribute Notices: Supported 00:09:52.499 Firmware Activation Notices: Not Supported 00:09:52.499 ANA Change Notices: Not Supported 00:09:52.499 PLE Aggregate Log Change Notices: Not Supported 00:09:52.499 LBA Status Info Alert Notices: Not Supported 00:09:52.499 EGE Aggregate Log Change Notices: Not Supported 00:09:52.499 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.499 Zone Descriptor Change Notices: Not Supported 00:09:52.499 Discovery Log Change Notices: Not Supported 00:09:52.499 Controller Attributes 00:09:52.499 128-bit Host Identifier: Not Supported 00:09:52.499 Non-Operational Permissive Mode: Not Supported 00:09:52.499 NVM Sets: Not Supported 00:09:52.499 Read Recovery Levels: Not Supported 00:09:52.499 Endurance Groups: Not Supported 00:09:52.499 Predictable Latency Mode: Not Supported 00:09:52.499 Traffic Based Keep ALive: Not Supported 00:09:52.499 Namespace Granularity: Not Supported 00:09:52.499 SQ Associations: Not Supported 00:09:52.499 UUID List: Not Supported 00:09:52.499 Multi-Domain Subsystem: Not Supported 00:09:52.499 Fixed Capacity Management: Not Supported 00:09:52.499 Variable Capacity Management: Not Supported 00:09:52.499 Delete Endurance Group: Not Supported 00:09:52.499 Delete NVM Set: Not Supported 00:09:52.499 Extended LBA Formats Supported: Supported 00:09:52.499 Flexible Data Placement Supported: Not Supported 00:09:52.499 00:09:52.499 Controller Memory Buffer Support 00:09:52.499 ================================ 00:09:52.499 Supported: No 00:09:52.499 00:09:52.499 Persistent Memory Region Support 00:09:52.499 ================================ 00:09:52.499 Supported: No 00:09:52.499 00:09:52.499 Admin Command Set Attributes 00:09:52.499 ============================ 00:09:52.499 Security Send/Receive: Not Supported 00:09:52.499 Format NVM: Supported 00:09:52.499 Firmware Activate/Download: Not Supported 00:09:52.499 Namespace Management: Supported 00:09:52.499 Device Self-Test: Not Supported 00:09:52.499 Directives: Supported 00:09:52.499 NVMe-MI: Not Supported 00:09:52.499 Virtualization Management: Not Supported 00:09:52.499 Doorbell Buffer Config: Supported 00:09:52.499 Get LBA Status Capability: Not Supported 00:09:52.499 Command & Feature Lockdown Capability: Not Supported 00:09:52.499 Abort Command Limit: 4 00:09:52.499 Async Event Request Limit: 4 00:09:52.499 Number of Firmware Slots: N/A 00:09:52.499 Firmware Slot 1 Read-Only: N/A 00:09:52.499 Firmware Activation Without Reset: N/A 00:09:52.499 Multiple Update Detection Support: N/A 00:09:52.499 Firmware Update Granularity: No Information Provided 00:09:52.499 Per-Namespace SMART Log: Yes 00:09:52.499 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.499 Subsystem NQN: 
nqn.2019-08.org.qemu:12340 00:09:52.499 Command Effects Log Page: Supported 00:09:52.499 Get Log Page Extended Data: Supported 00:09:52.499 Telemetry Log Pages: Not Supported 00:09:52.499 Persistent Event Log Pages: Not Supported 00:09:52.499 Supported Log Pages Log Page: May Support 00:09:52.499 Commands Supported & Effects Log Page: Not Supported 00:09:52.499 Feature Identifiers & Effects Log Page:May Support 00:09:52.499 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.499 Data Area 4 for Telemetry Log: Not Supported 00:09:52.499 Error Log Page Entries Supported: 1 00:09:52.499 Keep Alive: Not Supported 00:09:52.499 00:09:52.499 NVM Command Set Attributes 00:09:52.499 ========================== 00:09:52.499 Submission Queue Entry Size 00:09:52.499 Max: 64 00:09:52.499 Min: 64 00:09:52.499 Completion Queue Entry Size 00:09:52.499 Max: 16 00:09:52.499 Min: 16 00:09:52.499 Number of Namespaces: 256 00:09:52.499 Compare Command: Supported 00:09:52.499 Write Uncorrectable Command: Not Supported 00:09:52.499 Dataset Management Command: Supported 00:09:52.499 Write Zeroes Command: Supported 00:09:52.499 Set Features Save Field: Supported 00:09:52.499 Reservations: Not Supported 00:09:52.499 Timestamp: Supported 00:09:52.499 Copy: Supported 00:09:52.499 Volatile Write Cache: Present 00:09:52.499 Atomic Write Unit (Normal): 1 00:09:52.499 Atomic Write Unit (PFail): 1 00:09:52.499 Atomic Compare & Write Unit: 1 00:09:52.499 Fused Compare & Write: Not Supported 00:09:52.499 Scatter-Gather List 00:09:52.499 SGL Command Set: Supported 00:09:52.499 SGL Keyed: Not Supported 00:09:52.499 SGL Bit Bucket Descriptor: Not Supported 00:09:52.499 SGL Metadata Pointer: Not Supported 00:09:52.499 Oversized SGL: Not Supported 00:09:52.499 SGL Metadata Address: Not Supported 00:09:52.499 SGL Offset: Not Supported 00:09:52.499 Transport SGL Data Block: Not Supported 00:09:52.499 Replay Protected Memory Block: Not Supported 00:09:52.499 00:09:52.499 Firmware Slot Information 00:09:52.499 ========================= 00:09:52.499 Active slot: 1 00:09:52.499 Slot 1 Firmware Revision: 1.0 00:09:52.499 00:09:52.499 00:09:52.499 Commands Supported and Effects 00:09:52.499 ============================== 00:09:52.499 Admin Commands 00:09:52.499 -------------- 00:09:52.499 Delete I/O Submission Queue (00h): Supported 00:09:52.499 Create I/O Submission Queue (01h): Supported 00:09:52.499 Get Log Page (02h): Supported 00:09:52.499 Delete I/O Completion Queue (04h): Supported 00:09:52.499 Create I/O Completion Queue (05h): Supported 00:09:52.499 Identify (06h): Supported 00:09:52.499 Abort (08h): Supported 00:09:52.499 Set Features (09h): Supported 00:09:52.499 Get Features (0Ah): Supported 00:09:52.499 Asynchronous Event Request (0Ch): Supported 00:09:52.499 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:52.499 Directive Send (19h): Supported 00:09:52.499 Directive Receive (1Ah): Supported 00:09:52.499 Virtualization Management (1Ch): Supported 00:09:52.499 Doorbell Buffer Config (7Ch): Supported 00:09:52.499 Format NVM (80h): Supported LBA-Change 00:09:52.499 I/O Commands 00:09:52.499 ------------ 00:09:52.499 Flush (00h): Supported LBA-Change 00:09:52.499 Write (01h): Supported LBA-Change 00:09:52.499 Read (02h): Supported 00:09:52.499 Compare (05h): Supported 00:09:52.499 Write Zeroes (08h): Supported LBA-Change 00:09:52.499 Dataset Management (09h): Supported LBA-Change 00:09:52.499 Unknown (0Ch): Supported 00:09:52.499 Unknown (12h): Supported 00:09:52.499 Copy (19h): Supported LBA-Change 
00:09:52.499 Unknown (1Dh): Supported LBA-Change 00:09:52.499 00:09:52.499 Error Log 00:09:52.499 ========= 00:09:52.499 00:09:52.499 Arbitration 00:09:52.499 =========== 00:09:52.499 Arbitration Burst: no limit 00:09:52.499 00:09:52.499 Power Management 00:09:52.499 ================ 00:09:52.499 Number of Power States: 1 00:09:52.499 Current Power State: Power State #0 00:09:52.499 Power State #0: 00:09:52.499 Max Power: 25.00 W 00:09:52.499 Non-Operational State: Operational 00:09:52.499 Entry Latency: 16 microseconds 00:09:52.499 Exit Latency: 4 microseconds 00:09:52.499 Relative Read Throughput: 0 00:09:52.499 Relative Read Latency: 0 00:09:52.499 Relative Write Throughput: 0 00:09:52.499 Relative Write Latency: 0 00:09:52.499 Idle Power: Not Reported 00:09:52.500 Active Power: Not Reported 00:09:52.500 Non-Operational Permissive Mode: Not Supported 00:09:52.500 00:09:52.500 Health Information 00:09:52.500 ================== 00:09:52.500 Critical Warnings: 00:09:52.500 Available Spare Space: OK 00:09:52.500 Temperature: OK 00:09:52.500 Device Reliability: OK 00:09:52.500 Read Only: No 00:09:52.500 Volatile Memory Backup: OK 00:09:52.500 Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.500 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:52.500 Available Spare: 0% 00:09:52.500 Available Spare Threshold: 0% 00:09:52.500 Life Percentage Used: 0% 00:09:52.500 Data Units Read: 2062 00:09:52.500 Data Units Written: 946 00:09:52.500 Host Read Commands: 89957 00:09:52.500 Host Write Commands: 44618 00:09:52.500 Controller Busy Time: 0 minutes 00:09:52.500 Power Cycles: 0 00:09:52.500 Power On Hours: 0 hours 00:09:52.500 Unsafe Shutdowns: 0 00:09:52.500 Unrecoverable Media Errors: 0 00:09:52.500 Lifetime Error Log Entries: 0 00:09:52.500 Warning Temperature Time: 0 minutes 00:09:52.500 Critical Temperature Time: 0 minutes 00:09:52.500 00:09:52.500 Number of Queues 00:09:52.500 ================ 00:09:52.500 Number of I/O Submission Queues: 64 00:09:52.500 Number of I/O Completion Queues: 64 00:09:52.500 00:09:52.500 ZNS Specific Controller Data 00:09:52.500 ============================ 00:09:52.500 Zone Append Size Limit: 0 00:09:52.500 00:09:52.500 00:09:52.500 Active Namespaces 00:09:52.500 ================= 00:09:52.500 Namespace ID:1 00:09:52.500 Error Recovery Timeout: Unlimited 00:09:52.500 Command Set Identifier: NVM (00h) 00:09:52.500 Deallocate: Supported 00:09:52.500 Deallocated/Unwritten Error: Supported 00:09:52.500 Deallocated Read Value: All 0x00 00:09:52.500 Deallocate in Write Zeroes: Not Supported 00:09:52.500 Deallocated Guard Field: 0xFFFF 00:09:52.500 Flush: Supported 00:09:52.500 Reservation: Not Supported 00:09:52.500 Metadata Transferred as: Separate Metadata Buffer 00:09:52.500 Namespace Sharing Capabilities: Private 00:09:52.500 Size (in LBAs): 1548666 (5GiB) 00:09:52.500 Capacity (in LBAs): 1548666 (5GiB) 00:09:52.500 Utilization (in LBAs): 1548666 (5GiB) 00:09:52.500 Thin Provisioning: Not Supported 00:09:52.500 Per-NS Atomic Units: No 00:09:52.500 Maximum Single Source Range Length: 128 00:09:52.500 Maximum Copy Length: 128 00:09:52.500 Maximum Source Range Count: 128 00:09:52.500 NGUID/EUI64 Never Reused: No 00:09:52.500 Namespace Write Protected: No 00:09:52.500 Number of LBA Formats: 8 00:09:52.500 Current LBA Format: LBA Format #07 00:09:52.500 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.500 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.500 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.500 LBA Format #03: Data Size: 512 
Metadata Size: 64 00:09:52.500 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.500 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.500 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.500 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.500 00:09:52.500 17:56:09 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:52.500 17:56:09 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:09:52.759 ===================================================== 00:09:52.759 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:52.759 ===================================================== 00:09:52.759 Controller Capabilities/Features 00:09:52.759 ================================ 00:09:52.759 Vendor ID: 1b36 00:09:52.759 Subsystem Vendor ID: 1af4 00:09:52.759 Serial Number: 12341 00:09:52.759 Model Number: QEMU NVMe Ctrl 00:09:52.759 Firmware Version: 8.0.0 00:09:52.759 Recommended Arb Burst: 6 00:09:52.759 IEEE OUI Identifier: 00 54 52 00:09:52.759 Multi-path I/O 00:09:52.759 May have multiple subsystem ports: No 00:09:52.759 May have multiple controllers: No 00:09:52.759 Associated with SR-IOV VF: No 00:09:52.759 Max Data Transfer Size: 524288 00:09:52.759 Max Number of Namespaces: 256 00:09:52.759 Max Number of I/O Queues: 64 00:09:52.759 NVMe Specification Version (VS): 1.4 00:09:52.759 NVMe Specification Version (Identify): 1.4 00:09:52.759 Maximum Queue Entries: 2048 00:09:52.759 Contiguous Queues Required: Yes 00:09:52.759 Arbitration Mechanisms Supported 00:09:52.759 Weighted Round Robin: Not Supported 00:09:52.759 Vendor Specific: Not Supported 00:09:52.759 Reset Timeout: 7500 ms 00:09:52.759 Doorbell Stride: 4 bytes 00:09:52.759 NVM Subsystem Reset: Not Supported 00:09:52.759 Command Sets Supported 00:09:52.759 NVM Command Set: Supported 00:09:52.759 Boot Partition: Not Supported 00:09:52.759 Memory Page Size Minimum: 4096 bytes 00:09:52.759 Memory Page Size Maximum: 65536 bytes 00:09:52.759 Persistent Memory Region: Not Supported 00:09:52.759 Optional Asynchronous Events Supported 00:09:52.759 Namespace Attribute Notices: Supported 00:09:52.759 Firmware Activation Notices: Not Supported 00:09:52.759 ANA Change Notices: Not Supported 00:09:52.759 PLE Aggregate Log Change Notices: Not Supported 00:09:52.759 LBA Status Info Alert Notices: Not Supported 00:09:52.759 EGE Aggregate Log Change Notices: Not Supported 00:09:52.759 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.759 Zone Descriptor Change Notices: Not Supported 00:09:52.759 Discovery Log Change Notices: Not Supported 00:09:52.759 Controller Attributes 00:09:52.759 128-bit Host Identifier: Not Supported 00:09:52.759 Non-Operational Permissive Mode: Not Supported 00:09:52.759 NVM Sets: Not Supported 00:09:52.759 Read Recovery Levels: Not Supported 00:09:52.759 Endurance Groups: Not Supported 00:09:52.759 Predictable Latency Mode: Not Supported 00:09:52.759 Traffic Based Keep ALive: Not Supported 00:09:52.759 Namespace Granularity: Not Supported 00:09:52.759 SQ Associations: Not Supported 00:09:52.759 UUID List: Not Supported 00:09:52.759 Multi-Domain Subsystem: Not Supported 00:09:52.759 Fixed Capacity Management: Not Supported 00:09:52.759 Variable Capacity Management: Not Supported 00:09:52.759 Delete Endurance Group: Not Supported 00:09:52.759 Delete NVM Set: Not Supported 00:09:52.759 Extended LBA Formats Supported: Supported 00:09:52.759 Flexible Data Placement Supported: Not Supported 00:09:52.759 00:09:52.759 
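The bash trace above (`for bdf in "${bdfs[@]}"` in nvme/nvme.sh) shows the test invoking `spdk_nvme_identify` once per PCIe BDF, which is why each controller's dump repeats here. The same enumeration can be done in-process by letting the driver probe all local PCIe NVMe devices; the following is a hedged sketch of that pattern using SPDK's probe/attach callbacks, not the script's actual mechanism:

```c
/* Sketch: attach to every local PCIe NVMe controller, mirroring the
 * per-BDF loop in the shell trace above. */
#include <stdio.h>
#include <stdbool.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	return true; /* attach to every controller found */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr,
	  const struct spdk_nvme_ctrlr_opts *opts)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	/* e.g. "attached 0000:00:07.0: SN 12341" for the controllers above */
	printf("attached %s: SN %.20s\n", trid->traddr,
	       (const char *)cdata->sn);
}

int main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "probe_sketch";
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}
	/* A NULL transport ID means "probe all local PCIe controllers". */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		return 1;
	}
	return 0;
}
```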
Controller Memory Buffer Support 00:09:52.759 ================================ 00:09:52.759 Supported: No 00:09:52.759 00:09:52.759 Persistent Memory Region Support 00:09:52.759 ================================ 00:09:52.759 Supported: No 00:09:52.759 00:09:52.759 Admin Command Set Attributes 00:09:52.759 ============================ 00:09:52.759 Security Send/Receive: Not Supported 00:09:52.759 Format NVM: Supported 00:09:52.759 Firmware Activate/Download: Not Supported 00:09:52.759 Namespace Management: Supported 00:09:52.759 Device Self-Test: Not Supported 00:09:52.759 Directives: Supported 00:09:52.759 NVMe-MI: Not Supported 00:09:52.759 Virtualization Management: Not Supported 00:09:52.759 Doorbell Buffer Config: Supported 00:09:52.759 Get LBA Status Capability: Not Supported 00:09:52.759 Command & Feature Lockdown Capability: Not Supported 00:09:52.759 Abort Command Limit: 4 00:09:52.759 Async Event Request Limit: 4 00:09:52.759 Number of Firmware Slots: N/A 00:09:52.759 Firmware Slot 1 Read-Only: N/A 00:09:52.759 Firmware Activation Without Reset: N/A 00:09:52.759 Multiple Update Detection Support: N/A 00:09:52.759 Firmware Update Granularity: No Information Provided 00:09:52.759 Per-Namespace SMART Log: Yes 00:09:52.759 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.759 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:52.759 Command Effects Log Page: Supported 00:09:52.759 Get Log Page Extended Data: Supported 00:09:52.759 Telemetry Log Pages: Not Supported 00:09:52.759 Persistent Event Log Pages: Not Supported 00:09:52.759 Supported Log Pages Log Page: May Support 00:09:52.759 Commands Supported & Effects Log Page: Not Supported 00:09:52.759 Feature Identifiers & Effects Log Page:May Support 00:09:52.759 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.759 Data Area 4 for Telemetry Log: Not Supported 00:09:52.759 Error Log Page Entries Supported: 1 00:09:52.759 Keep Alive: Not Supported 00:09:52.759 00:09:52.759 NVM Command Set Attributes 00:09:52.759 ========================== 00:09:52.759 Submission Queue Entry Size 00:09:52.759 Max: 64 00:09:52.759 Min: 64 00:09:52.759 Completion Queue Entry Size 00:09:52.759 Max: 16 00:09:52.759 Min: 16 00:09:52.759 Number of Namespaces: 256 00:09:52.759 Compare Command: Supported 00:09:52.759 Write Uncorrectable Command: Not Supported 00:09:52.759 Dataset Management Command: Supported 00:09:52.759 Write Zeroes Command: Supported 00:09:52.759 Set Features Save Field: Supported 00:09:52.759 Reservations: Not Supported 00:09:52.759 Timestamp: Supported 00:09:52.759 Copy: Supported 00:09:52.759 Volatile Write Cache: Present 00:09:52.759 Atomic Write Unit (Normal): 1 00:09:52.759 Atomic Write Unit (PFail): 1 00:09:52.759 Atomic Compare & Write Unit: 1 00:09:52.759 Fused Compare & Write: Not Supported 00:09:52.759 Scatter-Gather List 00:09:52.759 SGL Command Set: Supported 00:09:52.759 SGL Keyed: Not Supported 00:09:52.759 SGL Bit Bucket Descriptor: Not Supported 00:09:52.759 SGL Metadata Pointer: Not Supported 00:09:52.759 Oversized SGL: Not Supported 00:09:52.759 SGL Metadata Address: Not Supported 00:09:52.759 SGL Offset: Not Supported 00:09:52.759 Transport SGL Data Block: Not Supported 00:09:52.759 Replay Protected Memory Block: Not Supported 00:09:52.759 00:09:52.759 Firmware Slot Information 00:09:52.759 ========================= 00:09:52.759 Active slot: 1 00:09:52.759 Slot 1 Firmware Revision: 1.0 00:09:52.759 00:09:52.759 00:09:52.759 Commands Supported and Effects 00:09:52.759 ============================== 
00:09:52.759 Admin Commands 00:09:52.759 -------------- 00:09:52.759 Delete I/O Submission Queue (00h): Supported 00:09:52.759 Create I/O Submission Queue (01h): Supported 00:09:52.759 Get Log Page (02h): Supported 00:09:52.759 Delete I/O Completion Queue (04h): Supported 00:09:52.759 Create I/O Completion Queue (05h): Supported 00:09:52.759 Identify (06h): Supported 00:09:52.759 Abort (08h): Supported 00:09:52.759 Set Features (09h): Supported 00:09:52.759 Get Features (0Ah): Supported 00:09:52.759 Asynchronous Event Request (0Ch): Supported 00:09:52.759 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:52.759 Directive Send (19h): Supported 00:09:52.759 Directive Receive (1Ah): Supported 00:09:52.759 Virtualization Management (1Ch): Supported 00:09:52.759 Doorbell Buffer Config (7Ch): Supported 00:09:52.759 Format NVM (80h): Supported LBA-Change 00:09:52.759 I/O Commands 00:09:52.759 ------------ 00:09:52.759 Flush (00h): Supported LBA-Change 00:09:52.759 Write (01h): Supported LBA-Change 00:09:52.759 Read (02h): Supported 00:09:52.759 Compare (05h): Supported 00:09:52.759 Write Zeroes (08h): Supported LBA-Change 00:09:52.759 Dataset Management (09h): Supported LBA-Change 00:09:52.759 Unknown (0Ch): Supported 00:09:52.759 Unknown (12h): Supported 00:09:52.759 Copy (19h): Supported LBA-Change 00:09:52.759 Unknown (1Dh): Supported LBA-Change 00:09:52.759 00:09:52.759 Error Log 00:09:52.759 ========= 00:09:52.759 00:09:52.759 Arbitration 00:09:52.759 =========== 00:09:52.759 Arbitration Burst: no limit 00:09:52.759 00:09:52.759 Power Management 00:09:52.759 ================ 00:09:52.759 Number of Power States: 1 00:09:52.759 Current Power State: Power State #0 00:09:52.759 Power State #0: 00:09:52.760 Max Power: 25.00 W 00:09:52.760 Non-Operational State: Operational 00:09:52.760 Entry Latency: 16 microseconds 00:09:52.760 Exit Latency: 4 microseconds 00:09:52.760 Relative Read Throughput: 0 00:09:52.760 Relative Read Latency: 0 00:09:52.760 Relative Write Throughput: 0 00:09:52.760 Relative Write Latency: 0 00:09:52.760 Idle Power: Not Reported 00:09:52.760 Active Power: Not Reported 00:09:52.760 Non-Operational Permissive Mode: Not Supported 00:09:52.760 00:09:52.760 Health Information 00:09:52.760 ================== 00:09:52.760 Critical Warnings: 00:09:52.760 Available Spare Space: OK 00:09:52.760 Temperature: OK 00:09:52.760 Device Reliability: OK 00:09:52.760 Read Only: No 00:09:52.760 Volatile Memory Backup: OK 00:09:52.760 Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.760 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:52.760 Available Spare: 0% 00:09:52.760 Available Spare Threshold: 0% 00:09:52.760 Life Percentage Used: 0% 00:09:52.760 Data Units Read: 1390 00:09:52.760 Data Units Written: 645 00:09:52.760 Host Read Commands: 59397 00:09:52.760 Host Write Commands: 29200 00:09:52.760 Controller Busy Time: 0 minutes 00:09:52.760 Power Cycles: 0 00:09:52.760 Power On Hours: 0 hours 00:09:52.760 Unsafe Shutdowns: 0 00:09:52.760 Unrecoverable Media Errors: 0 00:09:52.760 Lifetime Error Log Entries: 0 00:09:52.760 Warning Temperature Time: 0 minutes 00:09:52.760 Critical Temperature Time: 0 minutes 00:09:52.760 00:09:52.760 Number of Queues 00:09:52.760 ================ 00:09:52.760 Number of I/O Submission Queues: 64 00:09:52.760 Number of I/O Completion Queues: 64 00:09:52.760 00:09:52.760 ZNS Specific Controller Data 00:09:52.760 ============================ 00:09:52.760 Zone Append Size Limit: 0 00:09:52.760 00:09:52.760 00:09:52.760 Active Namespaces 
00:09:52.760 ================= 00:09:52.760 Namespace ID:1 00:09:52.760 Error Recovery Timeout: Unlimited 00:09:52.760 Command Set Identifier: NVM (00h) 00:09:52.760 Deallocate: Supported 00:09:52.760 Deallocated/Unwritten Error: Supported 00:09:52.760 Deallocated Read Value: All 0x00 00:09:52.760 Deallocate in Write Zeroes: Not Supported 00:09:52.760 Deallocated Guard Field: 0xFFFF 00:09:52.760 Flush: Supported 00:09:52.760 Reservation: Not Supported 00:09:52.760 Namespace Sharing Capabilities: Private 00:09:52.760 Size (in LBAs): 1310720 (5GiB) 00:09:52.760 Capacity (in LBAs): 1310720 (5GiB) 00:09:52.760 Utilization (in LBAs): 1310720 (5GiB) 00:09:52.760 Thin Provisioning: Not Supported 00:09:52.760 Per-NS Atomic Units: No 00:09:52.760 Maximum Single Source Range Length: 128 00:09:52.760 Maximum Copy Length: 128 00:09:52.760 Maximum Source Range Count: 128 00:09:52.760 NGUID/EUI64 Never Reused: No 00:09:52.760 Namespace Write Protected: No 00:09:52.760 Number of LBA Formats: 8 00:09:52.760 Current LBA Format: LBA Format #04 00:09:52.760 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.760 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:52.760 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:52.760 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:52.760 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:52.760 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:52.760 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:52.760 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:52.760 00:09:52.760 17:56:09 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:52.760 17:56:09 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:09:53.019 ===================================================== 00:09:53.019 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:53.019 ===================================================== 00:09:53.019 Controller Capabilities/Features 00:09:53.019 ================================ 00:09:53.019 Vendor ID: 1b36 00:09:53.019 Subsystem Vendor ID: 1af4 00:09:53.019 Serial Number: 12342 00:09:53.019 Model Number: QEMU NVMe Ctrl 00:09:53.019 Firmware Version: 8.0.0 00:09:53.019 Recommended Arb Burst: 6 00:09:53.019 IEEE OUI Identifier: 00 54 52 00:09:53.019 Multi-path I/O 00:09:53.019 May have multiple subsystem ports: No 00:09:53.019 May have multiple controllers: No 00:09:53.019 Associated with SR-IOV VF: No 00:09:53.019 Max Data Transfer Size: 524288 00:09:53.019 Max Number of Namespaces: 256 00:09:53.019 Max Number of I/O Queues: 64 00:09:53.019 NVMe Specification Version (VS): 1.4 00:09:53.019 NVMe Specification Version (Identify): 1.4 00:09:53.019 Maximum Queue Entries: 2048 00:09:53.019 Contiguous Queues Required: Yes 00:09:53.019 Arbitration Mechanisms Supported 00:09:53.019 Weighted Round Robin: Not Supported 00:09:53.019 Vendor Specific: Not Supported 00:09:53.019 Reset Timeout: 7500 ms 00:09:53.019 Doorbell Stride: 4 bytes 00:09:53.019 NVM Subsystem Reset: Not Supported 00:09:53.019 Command Sets Supported 00:09:53.019 NVM Command Set: Supported 00:09:53.019 Boot Partition: Not Supported 00:09:53.019 Memory Page Size Minimum: 4096 bytes 00:09:53.019 Memory Page Size Maximum: 65536 bytes 00:09:53.019 Persistent Memory Region: Not Supported 00:09:53.019 Optional Asynchronous Events Supported 00:09:53.019 Namespace Attribute Notices: Supported 00:09:53.019 Firmware Activation Notices: Not Supported 00:09:53.019 ANA Change Notices: Not Supported 
00:09:53.019 PLE Aggregate Log Change Notices: Not Supported 00:09:53.019 LBA Status Info Alert Notices: Not Supported 00:09:53.019 EGE Aggregate Log Change Notices: Not Supported 00:09:53.019 Normal NVM Subsystem Shutdown event: Not Supported 00:09:53.019 Zone Descriptor Change Notices: Not Supported 00:09:53.019 Discovery Log Change Notices: Not Supported 00:09:53.019 Controller Attributes 00:09:53.019 128-bit Host Identifier: Not Supported 00:09:53.019 Non-Operational Permissive Mode: Not Supported 00:09:53.019 NVM Sets: Not Supported 00:09:53.019 Read Recovery Levels: Not Supported 00:09:53.019 Endurance Groups: Not Supported 00:09:53.019 Predictable Latency Mode: Not Supported 00:09:53.019 Traffic Based Keep ALive: Not Supported 00:09:53.019 Namespace Granularity: Not Supported 00:09:53.019 SQ Associations: Not Supported 00:09:53.019 UUID List: Not Supported 00:09:53.019 Multi-Domain Subsystem: Not Supported 00:09:53.019 Fixed Capacity Management: Not Supported 00:09:53.019 Variable Capacity Management: Not Supported 00:09:53.019 Delete Endurance Group: Not Supported 00:09:53.019 Delete NVM Set: Not Supported 00:09:53.019 Extended LBA Formats Supported: Supported 00:09:53.019 Flexible Data Placement Supported: Not Supported 00:09:53.019 00:09:53.019 Controller Memory Buffer Support 00:09:53.019 ================================ 00:09:53.019 Supported: No 00:09:53.019 00:09:53.019 Persistent Memory Region Support 00:09:53.019 ================================ 00:09:53.019 Supported: No 00:09:53.019 00:09:53.019 Admin Command Set Attributes 00:09:53.019 ============================ 00:09:53.019 Security Send/Receive: Not Supported 00:09:53.019 Format NVM: Supported 00:09:53.019 Firmware Activate/Download: Not Supported 00:09:53.019 Namespace Management: Supported 00:09:53.019 Device Self-Test: Not Supported 00:09:53.019 Directives: Supported 00:09:53.019 NVMe-MI: Not Supported 00:09:53.019 Virtualization Management: Not Supported 00:09:53.019 Doorbell Buffer Config: Supported 00:09:53.019 Get LBA Status Capability: Not Supported 00:09:53.019 Command & Feature Lockdown Capability: Not Supported 00:09:53.019 Abort Command Limit: 4 00:09:53.019 Async Event Request Limit: 4 00:09:53.019 Number of Firmware Slots: N/A 00:09:53.019 Firmware Slot 1 Read-Only: N/A 00:09:53.019 Firmware Activation Without Reset: N/A 00:09:53.019 Multiple Update Detection Support: N/A 00:09:53.019 Firmware Update Granularity: No Information Provided 00:09:53.019 Per-Namespace SMART Log: Yes 00:09:53.019 Asymmetric Namespace Access Log Page: Not Supported 00:09:53.019 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:53.019 Command Effects Log Page: Supported 00:09:53.019 Get Log Page Extended Data: Supported 00:09:53.019 Telemetry Log Pages: Not Supported 00:09:53.019 Persistent Event Log Pages: Not Supported 00:09:53.019 Supported Log Pages Log Page: May Support 00:09:53.019 Commands Supported & Effects Log Page: Not Supported 00:09:53.019 Feature Identifiers & Effects Log Page:May Support 00:09:53.019 NVMe-MI Commands & Effects Log Page: May Support 00:09:53.019 Data Area 4 for Telemetry Log: Not Supported 00:09:53.019 Error Log Page Entries Supported: 1 00:09:53.019 Keep Alive: Not Supported 00:09:53.019 00:09:53.019 NVM Command Set Attributes 00:09:53.019 ========================== 00:09:53.019 Submission Queue Entry Size 00:09:53.019 Max: 64 00:09:53.019 Min: 64 00:09:53.019 Completion Queue Entry Size 00:09:53.019 Max: 16 00:09:53.019 Min: 16 00:09:53.019 Number of Namespaces: 256 00:09:53.019 Compare Command: 
Supported 00:09:53.019 Write Uncorrectable Command: Not Supported 00:09:53.019 Dataset Management Command: Supported 00:09:53.019 Write Zeroes Command: Supported 00:09:53.019 Set Features Save Field: Supported 00:09:53.019 Reservations: Not Supported 00:09:53.019 Timestamp: Supported 00:09:53.019 Copy: Supported 00:09:53.019 Volatile Write Cache: Present 00:09:53.019 Atomic Write Unit (Normal): 1 00:09:53.019 Atomic Write Unit (PFail): 1 00:09:53.019 Atomic Compare & Write Unit: 1 00:09:53.019 Fused Compare & Write: Not Supported 00:09:53.019 Scatter-Gather List 00:09:53.019 SGL Command Set: Supported 00:09:53.019 SGL Keyed: Not Supported 00:09:53.019 SGL Bit Bucket Descriptor: Not Supported 00:09:53.019 SGL Metadata Pointer: Not Supported 00:09:53.019 Oversized SGL: Not Supported 00:09:53.019 SGL Metadata Address: Not Supported 00:09:53.019 SGL Offset: Not Supported 00:09:53.019 Transport SGL Data Block: Not Supported 00:09:53.019 Replay Protected Memory Block: Not Supported 00:09:53.019 00:09:53.019 Firmware Slot Information 00:09:53.019 ========================= 00:09:53.019 Active slot: 1 00:09:53.019 Slot 1 Firmware Revision: 1.0 00:09:53.019 00:09:53.019 00:09:53.019 Commands Supported and Effects 00:09:53.019 ============================== 00:09:53.019 Admin Commands 00:09:53.019 -------------- 00:09:53.019 Delete I/O Submission Queue (00h): Supported 00:09:53.019 Create I/O Submission Queue (01h): Supported 00:09:53.020 Get Log Page (02h): Supported 00:09:53.020 Delete I/O Completion Queue (04h): Supported 00:09:53.020 Create I/O Completion Queue (05h): Supported 00:09:53.020 Identify (06h): Supported 00:09:53.020 Abort (08h): Supported 00:09:53.020 Set Features (09h): Supported 00:09:53.020 Get Features (0Ah): Supported 00:09:53.020 Asynchronous Event Request (0Ch): Supported 00:09:53.020 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:53.020 Directive Send (19h): Supported 00:09:53.020 Directive Receive (1Ah): Supported 00:09:53.020 Virtualization Management (1Ch): Supported 00:09:53.020 Doorbell Buffer Config (7Ch): Supported 00:09:53.020 Format NVM (80h): Supported LBA-Change 00:09:53.020 I/O Commands 00:09:53.020 ------------ 00:09:53.020 Flush (00h): Supported LBA-Change 00:09:53.020 Write (01h): Supported LBA-Change 00:09:53.020 Read (02h): Supported 00:09:53.020 Compare (05h): Supported 00:09:53.020 Write Zeroes (08h): Supported LBA-Change 00:09:53.020 Dataset Management (09h): Supported LBA-Change 00:09:53.020 Unknown (0Ch): Supported 00:09:53.020 Unknown (12h): Supported 00:09:53.020 Copy (19h): Supported LBA-Change 00:09:53.020 Unknown (1Dh): Supported LBA-Change 00:09:53.020 00:09:53.020 Error Log 00:09:53.020 ========= 00:09:53.020 00:09:53.020 Arbitration 00:09:53.020 =========== 00:09:53.020 Arbitration Burst: no limit 00:09:53.020 00:09:53.020 Power Management 00:09:53.020 ================ 00:09:53.020 Number of Power States: 1 00:09:53.020 Current Power State: Power State #0 00:09:53.020 Power State #0: 00:09:53.020 Max Power: 25.00 W 00:09:53.020 Non-Operational State: Operational 00:09:53.020 Entry Latency: 16 microseconds 00:09:53.020 Exit Latency: 4 microseconds 00:09:53.020 Relative Read Throughput: 0 00:09:53.020 Relative Read Latency: 0 00:09:53.020 Relative Write Throughput: 0 00:09:53.020 Relative Write Latency: 0 00:09:53.020 Idle Power: Not Reported 00:09:53.020 Active Power: Not Reported 00:09:53.020 Non-Operational Permissive Mode: Not Supported 00:09:53.020 00:09:53.020 Health Information 00:09:53.020 ================== 00:09:53.020 
Critical Warnings: 00:09:53.020 Available Spare Space: OK 00:09:53.020 Temperature: OK 00:09:53.020 Device Reliability: OK 00:09:53.020 Read Only: No 00:09:53.020 Volatile Memory Backup: OK 00:09:53.020 Current Temperature: 323 Kelvin (50 Celsius) 00:09:53.020 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:53.020 Available Spare: 0% 00:09:53.020 Available Spare Threshold: 0% 00:09:53.020 Life Percentage Used: 0% 00:09:53.020 Data Units Read: 4288 00:09:53.020 Data Units Written: 1984 00:09:53.020 Host Read Commands: 179976 00:09:53.020 Host Write Commands: 88205 00:09:53.020 Controller Busy Time: 0 minutes 00:09:53.020 Power Cycles: 0 00:09:53.020 Power On Hours: 0 hours 00:09:53.020 Unsafe Shutdowns: 0 00:09:53.020 Unrecoverable Media Errors: 0 00:09:53.020 Lifetime Error Log Entries: 0 00:09:53.020 Warning Temperature Time: 0 minutes 00:09:53.020 Critical Temperature Time: 0 minutes 00:09:53.020 00:09:53.020 Number of Queues 00:09:53.020 ================ 00:09:53.020 Number of I/O Submission Queues: 64 00:09:53.020 Number of I/O Completion Queues: 64 00:09:53.020 00:09:53.020 ZNS Specific Controller Data 00:09:53.020 ============================ 00:09:53.020 Zone Append Size Limit: 0 00:09:53.020 00:09:53.020 00:09:53.020 Active Namespaces 00:09:53.020 ================= 00:09:53.020 Namespace ID:1 00:09:53.020 Error Recovery Timeout: Unlimited 00:09:53.020 Command Set Identifier: NVM (00h) 00:09:53.020 Deallocate: Supported 00:09:53.020 Deallocated/Unwritten Error: Supported 00:09:53.020 Deallocated Read Value: All 0x00 00:09:53.020 Deallocate in Write Zeroes: Not Supported 00:09:53.020 Deallocated Guard Field: 0xFFFF 00:09:53.020 Flush: Supported 00:09:53.020 Reservation: Not Supported 00:09:53.020 Namespace Sharing Capabilities: Private 00:09:53.020 Size (in LBAs): 1048576 (4GiB) 00:09:53.020 Capacity (in LBAs): 1048576 (4GiB) 00:09:53.020 Utilization (in LBAs): 1048576 (4GiB) 00:09:53.020 Thin Provisioning: Not Supported 00:09:53.020 Per-NS Atomic Units: No 00:09:53.020 Maximum Single Source Range Length: 128 00:09:53.020 Maximum Copy Length: 128 00:09:53.020 Maximum Source Range Count: 128 00:09:53.020 NGUID/EUI64 Never Reused: No 00:09:53.020 Namespace Write Protected: No 00:09:53.020 Number of LBA Formats: 8 00:09:53.020 Current LBA Format: LBA Format #04 00:09:53.020 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:53.020 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:53.020 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:53.020 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:53.020 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:53.020 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:53.020 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:53.020 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:53.020 00:09:53.020 Namespace ID:2 00:09:53.020 Error Recovery Timeout: Unlimited 00:09:53.020 Command Set Identifier: NVM (00h) 00:09:53.020 Deallocate: Supported 00:09:53.020 Deallocated/Unwritten Error: Supported 00:09:53.020 Deallocated Read Value: All 0x00 00:09:53.020 Deallocate in Write Zeroes: Not Supported 00:09:53.020 Deallocated Guard Field: 0xFFFF 00:09:53.020 Flush: Supported 00:09:53.020 Reservation: Not Supported 00:09:53.020 Namespace Sharing Capabilities: Private 00:09:53.020 Size (in LBAs): 1048576 (4GiB) 00:09:53.020 Capacity (in LBAs): 1048576 (4GiB) 00:09:53.020 Utilization (in LBAs): 1048576 (4GiB) 00:09:53.020 Thin Provisioning: Not Supported 00:09:53.020 Per-NS Atomic Units: No 00:09:53.020 Maximum Single 
Source Range Length: 128 00:09:53.020 Maximum Copy Length: 128 00:09:53.020 Maximum Source Range Count: 128 00:09:53.020 NGUID/EUI64 Never Reused: No 00:09:53.020 Namespace Write Protected: No 00:09:53.020 Number of LBA Formats: 8 00:09:53.020 Current LBA Format: LBA Format #04 00:09:53.020 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:53.020 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:53.020 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:53.020 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:53.020 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:53.020 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:53.020 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:53.020 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:53.020 00:09:53.020 Namespace ID:3 00:09:53.020 Error Recovery Timeout: Unlimited 00:09:53.020 Command Set Identifier: NVM (00h) 00:09:53.020 Deallocate: Supported 00:09:53.020 Deallocated/Unwritten Error: Supported 00:09:53.020 Deallocated Read Value: All 0x00 00:09:53.020 Deallocate in Write Zeroes: Not Supported 00:09:53.020 Deallocated Guard Field: 0xFFFF 00:09:53.020 Flush: Supported 00:09:53.020 Reservation: Not Supported 00:09:53.020 Namespace Sharing Capabilities: Private 00:09:53.020 Size (in LBAs): 1048576 (4GiB) 00:09:53.020 Capacity (in LBAs): 1048576 (4GiB) 00:09:53.020 Utilization (in LBAs): 1048576 (4GiB) 00:09:53.020 Thin Provisioning: Not Supported 00:09:53.020 Per-NS Atomic Units: No 00:09:53.020 Maximum Single Source Range Length: 128 00:09:53.020 Maximum Copy Length: 128 00:09:53.020 Maximum Source Range Count: 128 00:09:53.020 NGUID/EUI64 Never Reused: No 00:09:53.020 Namespace Write Protected: No 00:09:53.020 Number of LBA Formats: 8 00:09:53.020 Current LBA Format: LBA Format #04 00:09:53.020 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:53.020 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:53.020 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:53.020 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:53.020 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:53.020 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:53.020 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:53.020 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:53.020 00:09:53.020 17:56:09 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:53.020 17:56:09 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:09:53.280 ===================================================== 00:09:53.280 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:53.280 ===================================================== 00:09:53.280 Controller Capabilities/Features 00:09:53.280 ================================ 00:09:53.280 Vendor ID: 1b36 00:09:53.280 Subsystem Vendor ID: 1af4 00:09:53.280 Serial Number: 12343 00:09:53.280 Model Number: QEMU NVMe Ctrl 00:09:53.280 Firmware Version: 8.0.0 00:09:53.280 Recommended Arb Burst: 6 00:09:53.280 IEEE OUI Identifier: 00 54 52 00:09:53.280 Multi-path I/O 00:09:53.280 May have multiple subsystem ports: No 00:09:53.280 May have multiple controllers: Yes 00:09:53.280 Associated with SR-IOV VF: No 00:09:53.280 Max Data Transfer Size: 524288 00:09:53.280 Max Number of Namespaces: 256 00:09:53.280 Max Number of I/O Queues: 64 00:09:53.280 NVMe Specification Version (VS): 1.4 00:09:53.280 NVMe Specification Version (Identify): 1.4 00:09:53.280 Maximum Queue Entries: 2048 00:09:53.280 Contiguous Queues 
Required: Yes 00:09:53.280 Arbitration Mechanisms Supported 00:09:53.280 Weighted Round Robin: Not Supported 00:09:53.280 Vendor Specific: Not Supported 00:09:53.280 Reset Timeout: 7500 ms 00:09:53.280 Doorbell Stride: 4 bytes 00:09:53.280 NVM Subsystem Reset: Not Supported 00:09:53.280 Command Sets Supported 00:09:53.280 NVM Command Set: Supported 00:09:53.280 Boot Partition: Not Supported 00:09:53.280 Memory Page Size Minimum: 4096 bytes 00:09:53.280 Memory Page Size Maximum: 65536 bytes 00:09:53.280 Persistent Memory Region: Not Supported 00:09:53.280 Optional Asynchronous Events Supported 00:09:53.280 Namespace Attribute Notices: Supported 00:09:53.280 Firmware Activation Notices: Not Supported 00:09:53.280 ANA Change Notices: Not Supported 00:09:53.280 PLE Aggregate Log Change Notices: Not Supported 00:09:53.280 LBA Status Info Alert Notices: Not Supported 00:09:53.280 EGE Aggregate Log Change Notices: Not Supported 00:09:53.280 Normal NVM Subsystem Shutdown event: Not Supported 00:09:53.280 Zone Descriptor Change Notices: Not Supported 00:09:53.280 Discovery Log Change Notices: Not Supported 00:09:53.280 Controller Attributes 00:09:53.280 128-bit Host Identifier: Not Supported 00:09:53.280 Non-Operational Permissive Mode: Not Supported 00:09:53.280 NVM Sets: Not Supported 00:09:53.280 Read Recovery Levels: Not Supported 00:09:53.280 Endurance Groups: Supported 00:09:53.280 Predictable Latency Mode: Not Supported 00:09:53.280 Traffic Based Keep Alive: Not Supported 00:09:53.280 Namespace Granularity: Not Supported 00:09:53.280 SQ Associations: Not Supported 00:09:53.280 UUID List: Not Supported 00:09:53.280 Multi-Domain Subsystem: Not Supported 00:09:53.280 Fixed Capacity Management: Not Supported 00:09:53.280 Variable Capacity Management: Not Supported 00:09:53.280 Delete Endurance Group: Not Supported 00:09:53.280 Delete NVM Set: Not Supported 00:09:53.280 Extended LBA Formats Supported: Supported 00:09:53.280 Flexible Data Placement Supported: Supported 00:09:53.280 00:09:53.280 Controller Memory Buffer Support 00:09:53.280 ================================ 00:09:53.280 Supported: No 00:09:53.280 00:09:53.280 Persistent Memory Region Support 00:09:53.280 ================================ 00:09:53.280 Supported: No 00:09:53.280 00:09:53.280 Admin Command Set Attributes 00:09:53.280 ============================ 00:09:53.280 Security Send/Receive: Not Supported 00:09:53.280 Format NVM: Supported 00:09:53.280 Firmware Activate/Download: Not Supported 00:09:53.280 Namespace Management: Supported 00:09:53.280 Device Self-Test: Not Supported 00:09:53.280 Directives: Supported 00:09:53.280 NVMe-MI: Not Supported 00:09:53.280 Virtualization Management: Not Supported 00:09:53.280 Doorbell Buffer Config: Supported 00:09:53.280 Get LBA Status Capability: Not Supported 00:09:53.280 Command & Feature Lockdown Capability: Not Supported 00:09:53.280 Abort Command Limit: 4 00:09:53.280 Async Event Request Limit: 4 00:09:53.280 Number of Firmware Slots: N/A 00:09:53.280 Firmware Slot 1 Read-Only: N/A 00:09:53.280 Firmware Activation Without Reset: N/A 00:09:53.280 Multiple Update Detection Support: N/A 00:09:53.280 Firmware Update Granularity: No Information Provided 00:09:53.281 Per-Namespace SMART Log: Yes 00:09:53.281 Asymmetric Namespace Access Log Page: Not Supported 00:09:53.281 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:53.281 Command Effects Log Page: Supported 00:09:53.281 Get Log Page Extended Data: Supported 00:09:53.281 Telemetry Log Pages: Not Supported 00:09:53.281 Persistent
Event Log Pages: Not Supported 00:09:53.281 Supported Log Pages Log Page: May Support 00:09:53.281 Commands Supported & Effects Log Page: Not Supported 00:09:53.281 Feature Identifiers & Effects Log Page: May Support 00:09:53.281 NVMe-MI Commands & Effects Log Page: May Support 00:09:53.281 Data Area 4 for Telemetry Log: Not Supported 00:09:53.281 Error Log Page Entries Supported: 1 00:09:53.281 Keep Alive: Not Supported 00:09:53.281 00:09:53.281 NVM Command Set Attributes 00:09:53.281 ========================== 00:09:53.281 Submission Queue Entry Size 00:09:53.281 Max: 64 00:09:53.281 Min: 64 00:09:53.281 Completion Queue Entry Size 00:09:53.281 Max: 16 00:09:53.281 Min: 16 00:09:53.281 Number of Namespaces: 256 00:09:53.281 Compare Command: Supported 00:09:53.281 Write Uncorrectable Command: Not Supported 00:09:53.281 Dataset Management Command: Supported 00:09:53.281 Write Zeroes Command: Supported 00:09:53.281 Set Features Save Field: Supported 00:09:53.281 Reservations: Not Supported 00:09:53.281 Timestamp: Supported 00:09:53.281 Copy: Supported 00:09:53.281 Volatile Write Cache: Present 00:09:53.281 Atomic Write Unit (Normal): 1 00:09:53.281 Atomic Write Unit (PFail): 1 00:09:53.281 Atomic Compare & Write Unit: 1 00:09:53.281 Fused Compare & Write: Not Supported 00:09:53.281 Scatter-Gather List 00:09:53.281 SGL Command Set: Supported 00:09:53.281 SGL Keyed: Not Supported 00:09:53.281 SGL Bit Bucket Descriptor: Not Supported 00:09:53.281 SGL Metadata Pointer: Not Supported 00:09:53.281 Oversized SGL: Not Supported 00:09:53.281 SGL Metadata Address: Not Supported 00:09:53.281 SGL Offset: Not Supported 00:09:53.281 Transport SGL Data Block: Not Supported 00:09:53.281 Replay Protected Memory Block: Not Supported 00:09:53.281 00:09:53.281 Firmware Slot Information 00:09:53.281 ========================= 00:09:53.281 Active slot: 1 00:09:53.281 Slot 1 Firmware Revision: 1.0 00:09:53.281 00:09:53.281 00:09:53.281 Commands Supported and Effects 00:09:53.281 ============================== 00:09:53.281 Admin Commands 00:09:53.281 -------------- 00:09:53.281 Delete I/O Submission Queue (00h): Supported 00:09:53.281 Create I/O Submission Queue (01h): Supported 00:09:53.281 Get Log Page (02h): Supported 00:09:53.281 Delete I/O Completion Queue (04h): Supported 00:09:53.281 Create I/O Completion Queue (05h): Supported 00:09:53.281 Identify (06h): Supported 00:09:53.281 Abort (08h): Supported 00:09:53.281 Set Features (09h): Supported 00:09:53.281 Get Features (0Ah): Supported 00:09:53.281 Asynchronous Event Request (0Ch): Supported 00:09:53.281 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:53.281 Directive Send (19h): Supported 00:09:53.281 Directive Receive (1Ah): Supported 00:09:53.281 Virtualization Management (1Ch): Supported 00:09:53.281 Doorbell Buffer Config (7Ch): Supported 00:09:53.281 Format NVM (80h): Supported LBA-Change 00:09:53.281 I/O Commands 00:09:53.281 ------------ 00:09:53.281 Flush (00h): Supported LBA-Change 00:09:53.281 Write (01h): Supported LBA-Change 00:09:53.281 Read (02h): Supported 00:09:53.281 Compare (05h): Supported 00:09:53.281 Write Zeroes (08h): Supported LBA-Change 00:09:53.281 Dataset Management (09h): Supported LBA-Change 00:09:53.281 Unknown (0Ch): Supported 00:09:53.281 Unknown (12h): Supported 00:09:53.281 Copy (19h): Supported LBA-Change 00:09:53.281 Unknown (1Dh): Supported LBA-Change 00:09:53.281 00:09:53.281 Error Log 00:09:53.281 ========= 00:09:53.281 00:09:53.281 Arbitration 00:09:53.281 =========== 00:09:53.281 Arbitration Burst: no
limit 00:09:53.281 00:09:53.281 Power Management 00:09:53.281 ================ 00:09:53.281 Number of Power States: 1 00:09:53.281 Current Power State: Power State #0 00:09:53.281 Power State #0: 00:09:53.281 Max Power: 25.00 W 00:09:53.281 Non-Operational State: Operational 00:09:53.281 Entry Latency: 16 microseconds 00:09:53.281 Exit Latency: 4 microseconds 00:09:53.281 Relative Read Throughput: 0 00:09:53.281 Relative Read Latency: 0 00:09:53.281 Relative Write Throughput: 0 00:09:53.281 Relative Write Latency: 0 00:09:53.281 Idle Power: Not Reported 00:09:53.281 Active Power: Not Reported 00:09:53.281 Non-Operational Permissive Mode: Not Supported 00:09:53.281 00:09:53.281 Health Information 00:09:53.281 ================== 00:09:53.281 Critical Warnings: 00:09:53.281 Available Spare Space: OK 00:09:53.281 Temperature: OK 00:09:53.281 Device Reliability: OK 00:09:53.281 Read Only: No 00:09:53.281 Volatile Memory Backup: OK 00:09:53.281 Current Temperature: 323 Kelvin (50 Celsius) 00:09:53.281 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:53.281 Available Spare: 0% 00:09:53.281 Available Spare Threshold: 0% 00:09:53.281 Life Percentage Used: 0% 00:09:53.281 Data Units Read: 1498 00:09:53.281 Data Units Written: 698 00:09:53.281 Host Read Commands: 60434 00:09:53.281 Host Write Commands: 29702 00:09:53.281 Controller Busy Time: 0 minutes 00:09:53.281 Power Cycles: 0 00:09:53.281 Power On Hours: 0 hours 00:09:53.281 Unsafe Shutdowns: 0 00:09:53.281 Unrecoverable Media Errors: 0 00:09:53.281 Lifetime Error Log Entries: 0 00:09:53.281 Warning Temperature Time: 0 minutes 00:09:53.281 Critical Temperature Time: 0 minutes 00:09:53.281 00:09:53.281 Number of Queues 00:09:53.281 ================ 00:09:53.281 Number of I/O Submission Queues: 64 00:09:53.281 Number of I/O Completion Queues: 64 00:09:53.281 00:09:53.281 ZNS Specific Controller Data 00:09:53.281 ============================ 00:09:53.281 Zone Append Size Limit: 0 00:09:53.281 00:09:53.281 00:09:53.281 Active Namespaces 00:09:53.281 ================= 00:09:53.281 Namespace ID:1 00:09:53.281 Error Recovery Timeout: Unlimited 00:09:53.281 Command Set Identifier: NVM (00h) 00:09:53.281 Deallocate: Supported 00:09:53.281 Deallocated/Unwritten Error: Supported 00:09:53.281 Deallocated Read Value: All 0x00 00:09:53.281 Deallocate in Write Zeroes: Not Supported 00:09:53.281 Deallocated Guard Field: 0xFFFF 00:09:53.281 Flush: Supported 00:09:53.281 Reservation: Not Supported 00:09:53.281 Namespace Sharing Capabilities: Multiple Controllers 00:09:53.281 Size (in LBAs): 262144 (1GiB) 00:09:53.281 Capacity (in LBAs): 262144 (1GiB) 00:09:53.281 Utilization (in LBAs): 262144 (1GiB) 00:09:53.281 Thin Provisioning: Not Supported 00:09:53.281 Per-NS Atomic Units: No 00:09:53.281 Maximum Single Source Range Length: 128 00:09:53.281 Maximum Copy Length: 128 00:09:53.281 Maximum Source Range Count: 128 00:09:53.281 NGUID/EUI64 Never Reused: No 00:09:53.281 Namespace Write Protected: No 00:09:53.281 Endurance group ID: 1 00:09:53.281 Number of LBA Formats: 8 00:09:53.281 Current LBA Format: LBA Format #04 00:09:53.281 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:53.281 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:53.281 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:53.281 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:53.281 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:53.281 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:53.281 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:53.281 LBA 
Format #07: Data Size: 4096 Metadata Size: 64 00:09:53.281 00:09:53.281 Get Feature FDP: 00:09:53.281 ================ 00:09:53.281 Enabled: Yes 00:09:53.281 FDP configuration index: 0 00:09:53.281 00:09:53.281 FDP configurations log page 00:09:53.281 =========================== 00:09:53.281 Number of FDP configurations: 1 00:09:53.281 Version: 0 00:09:53.281 Size: 112 00:09:53.281 FDP Configuration Descriptor: 0 00:09:53.281 Descriptor Size: 96 00:09:53.281 Reclaim Group Identifier format: 2 00:09:53.281 FDP Volatile Write Cache: Not Present 00:09:53.281 FDP Configuration: Valid 00:09:53.281 Vendor Specific Size: 0 00:09:53.281 Number of Reclaim Groups: 2 00:09:53.281 Number of Reclaim Unit Handles: 8 00:09:53.281 Max Placement Identifiers: 128 00:09:53.281 Number of Namespaces Supported: 256 00:09:53.281 Reclaim unit Nominal Size: 6000000 bytes 00:09:53.281 Estimated Reclaim Unit Time Limit: Not Reported 00:09:53.281 RUH Desc #000: RUH Type: Initially Isolated 00:09:53.281 RUH Desc #001: RUH Type: Initially Isolated 00:09:53.281 RUH Desc #002: RUH Type: Initially Isolated 00:09:53.281 RUH Desc #003: RUH Type: Initially Isolated 00:09:53.281 RUH Desc #004: RUH Type: Initially Isolated 00:09:53.282 RUH Desc #005: RUH Type: Initially Isolated 00:09:53.282 RUH Desc #006: RUH Type: Initially Isolated 00:09:53.282 RUH Desc #007: RUH Type: Initially Isolated 00:09:53.282 00:09:53.282 FDP reclaim unit handle usage log page 00:09:53.282 ====================================== 00:09:53.282 Number of Reclaim Unit Handles: 8 00:09:53.282 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:53.282 RUH Usage Desc #001: RUH Attributes: Unused 00:09:53.282 RUH Usage Desc #002: RUH Attributes: Unused 00:09:53.282 RUH Usage Desc #003: RUH Attributes: Unused 00:09:53.282 RUH Usage Desc #004: RUH Attributes: Unused 00:09:53.282 RUH Usage Desc #005: RUH Attributes: Unused 00:09:53.282 RUH Usage Desc #006: RUH Attributes: Unused 00:09:53.282 RUH Usage Desc #007: RUH Attributes: Unused 00:09:53.282 00:09:53.282 FDP statistics log page 00:09:53.282 ======================= 00:09:53.282 Host bytes with metadata written: 455462912 00:09:53.282 Media bytes with metadata written: 455516160 00:09:53.282 Media bytes erased: 0 00:09:53.282 00:09:53.282 FDP events log page 00:09:53.282 =================== 00:09:53.282 Number of FDP events: 0 00:09:53.282 00:09:53.282 00:09:53.282 real 0m1.497s 00:09:53.282 user 0m0.511s 00:09:53.282 sys 0m0.773s 00:09:53.282 17:56:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:53.282 ************************************ 00:09:53.282 END TEST nvme_identify 00:09:53.282 ************************************ 00:09:53.282 17:56:10 -- common/autotest_common.sh@10 -- # set +x 00:09:53.282 17:56:10 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:53.282 17:56:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:53.282 17:56:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:53.282 17:56:10 -- common/autotest_common.sh@10 -- # set +x 00:09:53.282 ************************************ 00:09:53.282 START TEST nvme_perf 00:09:53.282 ************************************ 00:09:53.282 17:56:10 -- common/autotest_common.sh@1114 -- # nvme_perf 00:09:53.282 17:56:10 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:54.661 Initializing NVMe Controllers 00:09:54.661 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:54.661 Attached to NVMe Controller at
0000:00:06.0 [1b36:0010] 00:09:54.661 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:54.661 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:54.661 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:54.661 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:54.662 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:54.662 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:54.662 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:54.662 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:54.662 Initialization complete. Launching workers. 00:09:54.662 ======================================================== 00:09:54.662 Latency(us) 00:09:54.662 Device Information : IOPS MiB/s Average min max 00:09:54.662 PCIE (0000:00:09.0) NSID 1 from core 0: 13816.22 161.91 9261.25 7561.33 29381.94 00:09:54.662 PCIE (0000:00:06.0) NSID 1 from core 0: 13816.22 161.91 9260.34 7429.07 30287.22 00:09:54.662 PCIE (0000:00:07.0) NSID 1 from core 0: 13816.22 161.91 9256.12 6383.68 30825.55 00:09:54.662 PCIE (0000:00:08.0) NSID 1 from core 0: 13816.22 161.91 9250.77 5631.80 32801.87 00:09:54.662 PCIE (0000:00:08.0) NSID 2 from core 0: 13816.22 161.91 9245.37 4905.69 33143.12 00:09:54.662 PCIE (0000:00:08.0) NSID 3 from core 0: 13944.15 163.41 9155.07 4194.15 22470.26 00:09:54.662 ======================================================== 00:09:54.662 Total : 83025.26 972.95 9238.03 4194.15 33143.12 00:09:54.662 00:09:54.662 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:54.662 ================================================================================= 00:09:54.662 1.00000% : 7737.986us 00:09:54.662 10.00000% : 8001.182us 00:09:54.662 25.00000% : 8369.658us 00:09:54.662 50.00000% : 8896.051us 00:09:54.662 75.00000% : 9422.445us 00:09:54.662 90.00000% : 10317.314us 00:09:54.662 95.00000% : 11317.462us 00:09:54.662 98.00000% : 15475.971us 00:09:54.662 99.00000% : 16844.594us 00:09:54.662 99.50000% : 28425.253us 00:09:54.662 99.90000% : 29267.483us 00:09:54.662 99.99000% : 29478.040us 00:09:54.662 99.99900% : 29478.040us 00:09:54.662 99.99990% : 29478.040us 00:09:54.662 99.99999% : 29478.040us 00:09:54.662 00:09:54.662 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:54.662 ================================================================================= 00:09:54.662 1.00000% : 7580.067us 00:09:54.662 10.00000% : 7895.904us 00:09:54.662 25.00000% : 8317.018us 00:09:54.662 50.00000% : 8896.051us 00:09:54.662 75.00000% : 9475.084us 00:09:54.662 90.00000% : 10317.314us 00:09:54.662 95.00000% : 11264.822us 00:09:54.662 98.00000% : 16002.365us 00:09:54.662 99.00000% : 17792.103us 00:09:54.662 99.50000% : 29267.483us 00:09:54.662 99.90000% : 30109.712us 00:09:54.662 99.99000% : 30320.270us 00:09:54.662 99.99900% : 30320.270us 00:09:54.662 99.99990% : 30320.270us 00:09:54.662 99.99999% : 30320.270us 00:09:54.662 00:09:54.662 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:54.662 ================================================================================= 00:09:54.662 1.00000% : 7737.986us 00:09:54.662 10.00000% : 8001.182us 00:09:54.662 25.00000% : 8369.658us 00:09:54.662 50.00000% : 8896.051us 00:09:54.662 75.00000% : 9369.806us 00:09:54.662 90.00000% : 10159.396us 00:09:54.662 95.00000% : 11212.183us 00:09:54.662 98.00000% : 16002.365us 00:09:54.662 99.00000% : 18529.054us 00:09:54.662 99.50000% : 29688.598us 00:09:54.662 99.90000% : 30741.385us 00:09:54.662 99.99000% : 
30951.942us 00:09:54.662 99.99900% : 30951.942us 00:09:54.662 99.99990% : 30951.942us 00:09:54.662 99.99999% : 30951.942us 00:09:54.662 00:09:54.662 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:54.662 ================================================================================= 00:09:54.662 1.00000% : 7632.707us 00:09:54.662 10.00000% : 8001.182us 00:09:54.662 25.00000% : 8369.658us 00:09:54.662 50.00000% : 8896.051us 00:09:54.662 75.00000% : 9369.806us 00:09:54.662 90.00000% : 10212.035us 00:09:54.662 95.00000% : 11370.101us 00:09:54.662 98.00000% : 15054.856us 00:09:54.662 99.00000% : 17370.988us 00:09:54.662 99.50000% : 31794.172us 00:09:54.662 99.90000% : 32636.402us 00:09:54.662 99.99000% : 32846.959us 00:09:54.662 99.99900% : 32846.959us 00:09:54.662 99.99990% : 32846.959us 00:09:54.662 99.99999% : 32846.959us 00:09:54.662 00:09:54.662 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:54.662 ================================================================================= 00:09:54.662 1.00000% : 7632.707us 00:09:54.662 10.00000% : 8001.182us 00:09:54.662 25.00000% : 8369.658us 00:09:54.662 50.00000% : 8896.051us 00:09:54.662 75.00000% : 9369.806us 00:09:54.662 90.00000% : 10159.396us 00:09:54.662 95.00000% : 11580.659us 00:09:54.662 98.00000% : 14633.741us 00:09:54.662 99.00000% : 16634.037us 00:09:54.662 99.50000% : 32215.287us 00:09:54.662 99.90000% : 33057.516us 00:09:54.662 99.99000% : 33268.074us 00:09:54.662 99.99900% : 33268.074us 00:09:54.662 99.99990% : 33268.074us 00:09:54.662 99.99999% : 33268.074us 00:09:54.662 00:09:54.662 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:54.662 ================================================================================= 00:09:54.662 1.00000% : 7632.707us 00:09:54.662 10.00000% : 8001.182us 00:09:54.662 25.00000% : 8369.658us 00:09:54.662 50.00000% : 8896.051us 00:09:54.662 75.00000% : 9422.445us 00:09:54.662 90.00000% : 10264.675us 00:09:54.662 95.00000% : 11422.741us 00:09:54.662 98.00000% : 14633.741us 00:09:54.662 99.00000% : 15475.971us 00:09:54.662 99.50000% : 21476.858us 00:09:54.662 99.90000% : 22319.088us 00:09:54.662 99.99000% : 22529.645us 00:09:54.662 99.99900% : 22529.645us 00:09:54.662 99.99990% : 22529.645us 00:09:54.662 99.99999% : 22529.645us 00:09:54.662 00:09:54.662 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:54.662 ============================================================================== 00:09:54.662 Range in us Cumulative IO count 00:09:54.662 7527.428 - 7580.067: 0.0145% ( 2) 00:09:54.662 7580.067 - 7632.707: 0.1157% ( 14) 00:09:54.662 7632.707 - 7685.346: 0.4847% ( 51) 00:09:54.662 7685.346 - 7737.986: 1.3093% ( 114) 00:09:54.662 7737.986 - 7790.625: 2.4089% ( 152) 00:09:54.662 7790.625 - 7843.264: 3.9931% ( 219) 00:09:54.662 7843.264 - 7895.904: 5.9606% ( 272) 00:09:54.662 7895.904 - 7948.543: 8.0440% ( 288) 00:09:54.662 7948.543 - 8001.182: 10.2720% ( 308) 00:09:54.662 8001.182 - 8053.822: 12.4638% ( 303) 00:09:54.662 8053.822 - 8106.461: 14.5978% ( 295) 00:09:54.662 8106.461 - 8159.100: 16.8475% ( 311) 00:09:54.662 8159.100 - 8211.740: 19.1117% ( 313) 00:09:54.662 8211.740 - 8264.379: 21.4048% ( 317) 00:09:54.662 8264.379 - 8317.018: 23.7558% ( 325) 00:09:54.662 8317.018 - 8369.658: 26.2153% ( 340) 00:09:54.662 8369.658 - 8422.297: 28.6024% ( 330) 00:09:54.662 8422.297 - 8474.937: 30.9823% ( 329) 00:09:54.662 8474.937 - 8527.576: 33.4274% ( 338) 00:09:54.662 8527.576 - 8580.215: 35.9013% ( 342) 00:09:54.662 
8580.215 - 8632.855: 38.4259% ( 349) 00:09:54.662 8632.855 - 8685.494: 40.9144% ( 344) 00:09:54.662 8685.494 - 8738.133: 43.4534% ( 351) 00:09:54.662 8738.133 - 8790.773: 46.0720% ( 362) 00:09:54.662 8790.773 - 8843.412: 48.6834% ( 361) 00:09:54.662 8843.412 - 8896.051: 51.2587% ( 356) 00:09:54.662 8896.051 - 8948.691: 53.9641% ( 374) 00:09:54.662 8948.691 - 9001.330: 56.5321% ( 355) 00:09:54.662 9001.330 - 9053.969: 59.1580% ( 363) 00:09:54.662 9053.969 - 9106.609: 61.8056% ( 366) 00:09:54.662 9106.609 - 9159.248: 64.4531% ( 366) 00:09:54.662 9159.248 - 9211.888: 67.0935% ( 365) 00:09:54.662 9211.888 - 9264.527: 69.7627% ( 369) 00:09:54.662 9264.527 - 9317.166: 72.4103% ( 366) 00:09:54.662 9317.166 - 9369.806: 74.8843% ( 342) 00:09:54.662 9369.806 - 9422.445: 77.0399% ( 298) 00:09:54.662 9422.445 - 9475.084: 78.9135% ( 259) 00:09:54.662 9475.084 - 9527.724: 80.4398% ( 211) 00:09:54.662 9527.724 - 9580.363: 81.7491% ( 181) 00:09:54.662 9580.363 - 9633.002: 82.7329% ( 136) 00:09:54.662 9633.002 - 9685.642: 83.6155% ( 122) 00:09:54.662 9685.642 - 9738.281: 84.4546% ( 116) 00:09:54.662 9738.281 - 9790.920: 85.1924% ( 102) 00:09:54.662 9790.920 - 9843.560: 85.8579% ( 92) 00:09:54.662 9843.560 - 9896.199: 86.4873% ( 87) 00:09:54.662 9896.199 - 9948.839: 87.0732% ( 81) 00:09:54.662 9948.839 - 10001.478: 87.5723% ( 69) 00:09:54.662 10001.478 - 10054.117: 88.0570% ( 67) 00:09:54.662 10054.117 - 10106.757: 88.5489% ( 68) 00:09:54.662 10106.757 - 10159.396: 88.9902% ( 61) 00:09:54.662 10159.396 - 10212.035: 89.4097% ( 58) 00:09:54.662 10212.035 - 10264.675: 89.8438% ( 60) 00:09:54.662 10264.675 - 10317.314: 90.2488% ( 56) 00:09:54.662 10317.314 - 10369.953: 90.6395% ( 54) 00:09:54.662 10369.953 - 10422.593: 90.9722% ( 46) 00:09:54.662 10422.593 - 10475.232: 91.2616% ( 40) 00:09:54.662 10475.232 - 10527.871: 91.5365% ( 38) 00:09:54.662 10527.871 - 10580.511: 91.7896% ( 35) 00:09:54.662 10580.511 - 10633.150: 92.0573% ( 37) 00:09:54.662 10633.150 - 10685.790: 92.3249% ( 37) 00:09:54.662 10685.790 - 10738.429: 92.5709% ( 34) 00:09:54.662 10738.429 - 10791.068: 92.8024% ( 32) 00:09:54.662 10791.068 - 10843.708: 93.0700% ( 37) 00:09:54.662 10843.708 - 10896.347: 93.3594% ( 40) 00:09:54.662 10896.347 - 10948.986: 93.6126% ( 35) 00:09:54.662 10948.986 - 11001.626: 93.8223% ( 29) 00:09:54.662 11001.626 - 11054.265: 94.0466% ( 31) 00:09:54.662 11054.265 - 11106.904: 94.2419% ( 27) 00:09:54.662 11106.904 - 11159.544: 94.4517% ( 29) 00:09:54.662 11159.544 - 11212.183: 94.6398% ( 26) 00:09:54.662 11212.183 - 11264.822: 94.8351% ( 27) 00:09:54.662 11264.822 - 11317.462: 95.0304% ( 27) 00:09:54.662 11317.462 - 11370.101: 95.1534% ( 17) 00:09:54.662 11370.101 - 11422.741: 95.2474% ( 13) 00:09:54.662 11422.741 - 11475.380: 95.3197% ( 10) 00:09:54.663 11475.380 - 11528.019: 95.3704% ( 7) 00:09:54.663 11528.019 - 11580.659: 95.4138% ( 6) 00:09:54.663 11580.659 - 11633.298: 95.4644% ( 7) 00:09:54.663 11633.298 - 11685.937: 95.5078% ( 6) 00:09:54.663 11685.937 - 11738.577: 95.5657% ( 8) 00:09:54.663 11738.577 - 11791.216: 95.6163% ( 7) 00:09:54.663 11791.216 - 11843.855: 95.6597% ( 6) 00:09:54.663 11843.855 - 11896.495: 95.7248% ( 9) 00:09:54.663 11896.495 - 11949.134: 95.7899% ( 9) 00:09:54.663 11949.134 - 12001.773: 95.8406% ( 7) 00:09:54.663 12001.773 - 12054.413: 95.9129% ( 10) 00:09:54.663 12054.413 - 12107.052: 95.9852% ( 10) 00:09:54.663 12107.052 - 12159.692: 96.0359% ( 7) 00:09:54.663 12159.692 - 12212.331: 96.1155% ( 11) 00:09:54.663 12212.331 - 12264.970: 96.1878% ( 10) 00:09:54.663 12264.970 - 12317.610: 
96.2457% ( 8) 00:09:54.663 12317.610 - 12370.249: 96.3180% ( 10) 00:09:54.663 12370.249 - 12422.888: 96.3903% ( 10) 00:09:54.663 12422.888 - 12475.528: 96.4265% ( 5) 00:09:54.663 12475.528 - 12528.167: 96.4699% ( 6) 00:09:54.663 12528.167 - 12580.806: 96.5061% ( 5) 00:09:54.663 12580.806 - 12633.446: 96.5422% ( 5) 00:09:54.663 12633.446 - 12686.085: 96.5784% ( 5) 00:09:54.663 12686.085 - 12738.724: 96.5929% ( 2) 00:09:54.663 12738.724 - 12791.364: 96.6146% ( 3) 00:09:54.663 12791.364 - 12844.003: 96.6363% ( 3) 00:09:54.663 12844.003 - 12896.643: 96.6508% ( 2) 00:09:54.663 12896.643 - 12949.282: 96.6725% ( 3) 00:09:54.663 12949.282 - 13001.921: 96.6869% ( 2) 00:09:54.663 13001.921 - 13054.561: 96.7086% ( 3) 00:09:54.663 13054.561 - 13107.200: 96.7231% ( 2) 00:09:54.663 13107.200 - 13159.839: 96.7376% ( 2) 00:09:54.663 13159.839 - 13212.479: 96.7593% ( 3) 00:09:54.663 13212.479 - 13265.118: 96.7810% ( 3) 00:09:54.663 13265.118 - 13317.757: 96.7954% ( 2) 00:09:54.663 13317.757 - 13370.397: 96.8171% ( 3) 00:09:54.663 13370.397 - 13423.036: 96.8388% ( 3) 00:09:54.663 13423.036 - 13475.676: 96.8533% ( 2) 00:09:54.663 13475.676 - 13580.954: 96.8895% ( 5) 00:09:54.663 13580.954 - 13686.233: 96.9329% ( 6) 00:09:54.663 13686.233 - 13791.512: 96.9618% ( 4) 00:09:54.663 13791.512 - 13896.790: 96.9980% ( 5) 00:09:54.663 13896.790 - 14002.069: 97.0414% ( 6) 00:09:54.663 14002.069 - 14107.348: 97.1282% ( 12) 00:09:54.663 14107.348 - 14212.627: 97.2078% ( 11) 00:09:54.663 14212.627 - 14317.905: 97.2801% ( 10) 00:09:54.663 14317.905 - 14423.184: 97.3741% ( 13) 00:09:54.663 14423.184 - 14528.463: 97.4465% ( 10) 00:09:54.663 14528.463 - 14633.741: 97.5043% ( 8) 00:09:54.663 14633.741 - 14739.020: 97.5477% ( 6) 00:09:54.663 14739.020 - 14844.299: 97.5984% ( 7) 00:09:54.663 14844.299 - 14949.578: 97.6418% ( 6) 00:09:54.663 14949.578 - 15054.856: 97.7141% ( 10) 00:09:54.663 15054.856 - 15160.135: 97.7937% ( 11) 00:09:54.663 15160.135 - 15265.414: 97.8877% ( 13) 00:09:54.663 15265.414 - 15370.692: 97.9890% ( 14) 00:09:54.663 15370.692 - 15475.971: 98.0830% ( 13) 00:09:54.663 15475.971 - 15581.250: 98.1771% ( 13) 00:09:54.663 15581.250 - 15686.529: 98.2711% ( 13) 00:09:54.663 15686.529 - 15791.807: 98.3724% ( 14) 00:09:54.663 15791.807 - 15897.086: 98.4664% ( 13) 00:09:54.663 15897.086 - 16002.365: 98.5605% ( 13) 00:09:54.663 16002.365 - 16107.643: 98.6545% ( 13) 00:09:54.663 16107.643 - 16212.922: 98.7196% ( 9) 00:09:54.663 16212.922 - 16318.201: 98.7703% ( 7) 00:09:54.663 16318.201 - 16423.480: 98.8137% ( 6) 00:09:54.663 16423.480 - 16528.758: 98.8715% ( 8) 00:09:54.663 16528.758 - 16634.037: 98.9222% ( 7) 00:09:54.663 16634.037 - 16739.316: 98.9728% ( 7) 00:09:54.663 16739.316 - 16844.594: 99.0234% ( 7) 00:09:54.663 16844.594 - 16949.873: 99.0741% ( 7) 00:09:54.663 27161.908 - 27372.466: 99.0885% ( 2) 00:09:54.663 27372.466 - 27583.023: 99.1898% ( 14) 00:09:54.663 27583.023 - 27793.581: 99.2911% ( 14) 00:09:54.663 27793.581 - 28004.138: 99.3851% ( 13) 00:09:54.663 28004.138 - 28214.696: 99.4864% ( 14) 00:09:54.663 28214.696 - 28425.253: 99.5660% ( 11) 00:09:54.663 28425.253 - 28635.810: 99.6672% ( 14) 00:09:54.663 28635.810 - 28846.368: 99.7613% ( 13) 00:09:54.663 28846.368 - 29056.925: 99.8626% ( 14) 00:09:54.663 29056.925 - 29267.483: 99.9638% ( 14) 00:09:54.663 29267.483 - 29478.040: 100.0000% ( 5) 00:09:54.663 00:09:54.663 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:54.663 ============================================================================== 00:09:54.663 Range in us 
Cumulative IO count 00:09:54.663 7422.149 - 7474.789: 0.1085% ( 15) 00:09:54.663 7474.789 - 7527.428: 0.2894% ( 25) 00:09:54.663 7527.428 - 7580.067: 1.0055% ( 99) 00:09:54.663 7580.067 - 7632.707: 2.0544% ( 145) 00:09:54.663 7632.707 - 7685.346: 3.3637% ( 181) 00:09:54.663 7685.346 - 7737.986: 5.0998% ( 240) 00:09:54.663 7737.986 - 7790.625: 6.9661% ( 258) 00:09:54.663 7790.625 - 7843.264: 8.8542% ( 261) 00:09:54.663 7843.264 - 7895.904: 10.6481% ( 248) 00:09:54.663 7895.904 - 7948.543: 12.5579% ( 264) 00:09:54.663 7948.543 - 8001.182: 14.5038% ( 269) 00:09:54.663 8001.182 - 8053.822: 16.4641% ( 271) 00:09:54.663 8053.822 - 8106.461: 18.3594% ( 262) 00:09:54.663 8106.461 - 8159.100: 20.3487% ( 275) 00:09:54.663 8159.100 - 8211.740: 22.4248% ( 287) 00:09:54.663 8211.740 - 8264.379: 24.3996% ( 273) 00:09:54.663 8264.379 - 8317.018: 26.4757% ( 287) 00:09:54.663 8317.018 - 8369.658: 28.6458% ( 300) 00:09:54.663 8369.658 - 8422.297: 30.7147% ( 286) 00:09:54.663 8422.297 - 8474.937: 32.8270% ( 292) 00:09:54.663 8474.937 - 8527.576: 35.0839% ( 312) 00:09:54.663 8527.576 - 8580.215: 37.2613% ( 301) 00:09:54.663 8580.215 - 8632.855: 39.4604% ( 304) 00:09:54.663 8632.855 - 8685.494: 41.5871% ( 294) 00:09:54.663 8685.494 - 8738.133: 43.7211% ( 295) 00:09:54.663 8738.133 - 8790.773: 45.8550% ( 295) 00:09:54.663 8790.773 - 8843.412: 48.0903% ( 309) 00:09:54.663 8843.412 - 8896.051: 50.2894% ( 304) 00:09:54.663 8896.051 - 8948.691: 52.5318% ( 310) 00:09:54.663 8948.691 - 9001.330: 54.7888% ( 312) 00:09:54.663 9001.330 - 9053.969: 57.0674% ( 315) 00:09:54.663 9053.969 - 9106.609: 59.3171% ( 311) 00:09:54.663 9106.609 - 9159.248: 61.7188% ( 332) 00:09:54.663 9159.248 - 9211.888: 63.8600% ( 296) 00:09:54.663 9211.888 - 9264.527: 66.2254% ( 327) 00:09:54.663 9264.527 - 9317.166: 68.5041% ( 315) 00:09:54.663 9317.166 - 9369.806: 70.8767% ( 328) 00:09:54.663 9369.806 - 9422.445: 73.2784% ( 332) 00:09:54.663 9422.445 - 9475.084: 75.5281% ( 311) 00:09:54.663 9475.084 - 9527.724: 77.6982% ( 300) 00:09:54.663 9527.724 - 9580.363: 79.6296% ( 267) 00:09:54.663 9580.363 - 9633.002: 81.3151% ( 233) 00:09:54.663 9633.002 - 9685.642: 82.5810% ( 175) 00:09:54.663 9685.642 - 9738.281: 83.7023% ( 155) 00:09:54.663 9738.281 - 9790.920: 84.4690% ( 106) 00:09:54.663 9790.920 - 9843.560: 85.2648% ( 110) 00:09:54.663 9843.560 - 9896.199: 85.9520% ( 95) 00:09:54.663 9896.199 - 9948.839: 86.5813% ( 87) 00:09:54.663 9948.839 - 10001.478: 87.2106% ( 87) 00:09:54.663 10001.478 - 10054.117: 87.7749% ( 78) 00:09:54.663 10054.117 - 10106.757: 88.3825% ( 84) 00:09:54.663 10106.757 - 10159.396: 88.8527% ( 65) 00:09:54.663 10159.396 - 10212.035: 89.3157% ( 64) 00:09:54.663 10212.035 - 10264.675: 89.7931% ( 66) 00:09:54.663 10264.675 - 10317.314: 90.2127% ( 58) 00:09:54.663 10317.314 - 10369.953: 90.6322% ( 58) 00:09:54.663 10369.953 - 10422.593: 91.0012% ( 51) 00:09:54.663 10422.593 - 10475.232: 91.3701% ( 51) 00:09:54.663 10475.232 - 10527.871: 91.6522% ( 39) 00:09:54.663 10527.871 - 10580.511: 91.9850% ( 46) 00:09:54.663 10580.511 - 10633.150: 92.2888% ( 42) 00:09:54.663 10633.150 - 10685.790: 92.6143% ( 45) 00:09:54.663 10685.790 - 10738.429: 92.8747% ( 36) 00:09:54.663 10738.429 - 10791.068: 93.1568% ( 39) 00:09:54.663 10791.068 - 10843.708: 93.4462% ( 40) 00:09:54.663 10843.708 - 10896.347: 93.6632% ( 30) 00:09:54.663 10896.347 - 10948.986: 93.9091% ( 34) 00:09:54.663 10948.986 - 11001.626: 94.1045% ( 27) 00:09:54.663 11001.626 - 11054.265: 94.3215% ( 30) 00:09:54.663 11054.265 - 11106.904: 94.5023% ( 25) 00:09:54.663 11106.904 - 
11159.544: 94.6904% ( 26) 00:09:54.663 11159.544 - 11212.183: 94.8929% ( 28) 00:09:54.663 11212.183 - 11264.822: 95.0159% ( 17) 00:09:54.663 11264.822 - 11317.462: 95.1823% ( 23) 00:09:54.663 11317.462 - 11370.101: 95.3053% ( 17) 00:09:54.663 11370.101 - 11422.741: 95.4789% ( 24) 00:09:54.663 11422.741 - 11475.380: 95.5729% ( 13) 00:09:54.663 11475.380 - 11528.019: 95.6742% ( 14) 00:09:54.663 11528.019 - 11580.659: 95.7755% ( 14) 00:09:54.663 11580.659 - 11633.298: 95.8478% ( 10) 00:09:54.663 11633.298 - 11685.937: 95.9274% ( 11) 00:09:54.663 11685.937 - 11738.577: 96.0069% ( 11) 00:09:54.663 11738.577 - 11791.216: 96.0938% ( 12) 00:09:54.663 11791.216 - 11843.855: 96.1733% ( 11) 00:09:54.663 11843.855 - 11896.495: 96.2529% ( 11) 00:09:54.663 11896.495 - 11949.134: 96.3397% ( 12) 00:09:54.663 11949.134 - 12001.773: 96.3976% ( 8) 00:09:54.663 12001.773 - 12054.413: 96.4627% ( 9) 00:09:54.663 12054.413 - 12107.052: 96.4988% ( 5) 00:09:54.663 12107.052 - 12159.692: 96.5278% ( 4) 00:09:54.663 12159.692 - 12212.331: 96.5422% ( 2) 00:09:54.663 12212.331 - 12264.970: 96.5639% ( 3) 00:09:54.663 12264.970 - 12317.610: 96.5712% ( 1) 00:09:54.663 12317.610 - 12370.249: 96.5856% ( 2) 00:09:54.663 12370.249 - 12422.888: 96.6073% ( 3) 00:09:54.663 12422.888 - 12475.528: 96.6291% ( 3) 00:09:54.663 12475.528 - 12528.167: 96.6435% ( 2) 00:09:54.663 12528.167 - 12580.806: 96.6580% ( 2) 00:09:54.663 12580.806 - 12633.446: 96.6725% ( 2) 00:09:54.663 12633.446 - 12686.085: 96.6869% ( 2) 00:09:54.663 12686.085 - 12738.724: 96.7014% ( 2) 00:09:54.663 12738.724 - 12791.364: 96.7159% ( 2) 00:09:54.664 12791.364 - 12844.003: 96.7376% ( 3) 00:09:54.664 12844.003 - 12896.643: 96.7520% ( 2) 00:09:54.664 12896.643 - 12949.282: 96.7665% ( 2) 00:09:54.664 12949.282 - 13001.921: 96.7810% ( 2) 00:09:54.664 13001.921 - 13054.561: 96.8027% ( 3) 00:09:54.664 13054.561 - 13107.200: 96.8099% ( 1) 00:09:54.664 13107.200 - 13159.839: 96.8316% ( 3) 00:09:54.664 13159.839 - 13212.479: 96.8388% ( 1) 00:09:54.664 13212.479 - 13265.118: 96.8605% ( 3) 00:09:54.664 13265.118 - 13317.757: 96.8750% ( 2) 00:09:54.664 13317.757 - 13370.397: 96.8895% ( 2) 00:09:54.664 13370.397 - 13423.036: 96.9112% ( 3) 00:09:54.664 13423.036 - 13475.676: 96.9256% ( 2) 00:09:54.664 13475.676 - 13580.954: 96.9546% ( 4) 00:09:54.664 13580.954 - 13686.233: 96.9907% ( 5) 00:09:54.664 13686.233 - 13791.512: 97.0197% ( 4) 00:09:54.664 13791.512 - 13896.790: 97.0414% ( 3) 00:09:54.664 13896.790 - 14002.069: 97.1065% ( 9) 00:09:54.664 14002.069 - 14107.348: 97.1788% ( 10) 00:09:54.664 14107.348 - 14212.627: 97.2656% ( 12) 00:09:54.664 14212.627 - 14317.905: 97.3163% ( 7) 00:09:54.664 14317.905 - 14423.184: 97.3886% ( 10) 00:09:54.664 14423.184 - 14528.463: 97.4392% ( 7) 00:09:54.664 14528.463 - 14633.741: 97.4826% ( 6) 00:09:54.664 14633.741 - 14739.020: 97.5188% ( 5) 00:09:54.664 14739.020 - 14844.299: 97.5550% ( 5) 00:09:54.664 14949.578 - 15054.856: 97.5911% ( 5) 00:09:54.664 15054.856 - 15160.135: 97.6418% ( 7) 00:09:54.664 15160.135 - 15265.414: 97.6635% ( 3) 00:09:54.664 15265.414 - 15370.692: 97.6997% ( 5) 00:09:54.664 15370.692 - 15475.971: 97.7358% ( 5) 00:09:54.664 15475.971 - 15581.250: 97.7720% ( 5) 00:09:54.664 15581.250 - 15686.529: 97.8226% ( 7) 00:09:54.664 15686.529 - 15791.807: 97.9022% ( 11) 00:09:54.664 15791.807 - 15897.086: 97.9745% ( 10) 00:09:54.664 15897.086 - 16002.365: 98.0541% ( 11) 00:09:54.664 16002.365 - 16107.643: 98.1264% ( 10) 00:09:54.664 16107.643 - 16212.922: 98.1988% ( 10) 00:09:54.664 16212.922 - 16318.201: 98.2784% ( 11) 
00:09:54.664 16318.201 - 16423.480: 98.3579% ( 11) 00:09:54.664 16423.480 - 16528.758: 98.4303% ( 10) 00:09:54.664 16528.758 - 16634.037: 98.5026% ( 10) 00:09:54.664 16634.037 - 16739.316: 98.5749% ( 10) 00:09:54.664 16739.316 - 16844.594: 98.6328% ( 8) 00:09:54.664 16844.594 - 16949.873: 98.6762% ( 6) 00:09:54.664 16949.873 - 17055.152: 98.7196% ( 6) 00:09:54.664 17055.152 - 17160.431: 98.7630% ( 6) 00:09:54.664 17160.431 - 17265.709: 98.8064% ( 6) 00:09:54.664 17265.709 - 17370.988: 98.8354% ( 4) 00:09:54.664 17370.988 - 17476.267: 98.8788% ( 6) 00:09:54.664 17476.267 - 17581.545: 98.9222% ( 6) 00:09:54.664 17581.545 - 17686.824: 98.9583% ( 5) 00:09:54.664 17686.824 - 17792.103: 99.0017% ( 6) 00:09:54.664 17792.103 - 17897.382: 99.0451% ( 6) 00:09:54.664 17897.382 - 18002.660: 99.0741% ( 4) 00:09:54.664 28004.138 - 28214.696: 99.1392% ( 9) 00:09:54.664 28214.696 - 28425.253: 99.2332% ( 13) 00:09:54.664 28425.253 - 28635.810: 99.3200% ( 12) 00:09:54.664 28635.810 - 28846.368: 99.4068% ( 12) 00:09:54.664 28846.368 - 29056.925: 99.4936% ( 12) 00:09:54.664 29056.925 - 29267.483: 99.5877% ( 13) 00:09:54.664 29267.483 - 29478.040: 99.6672% ( 11) 00:09:54.664 29478.040 - 29688.598: 99.7468% ( 11) 00:09:54.664 29688.598 - 29899.155: 99.8336% ( 12) 00:09:54.664 29899.155 - 30109.712: 99.9277% ( 13) 00:09:54.664 30109.712 - 30320.270: 100.0000% ( 10) 00:09:54.664 00:09:54.664 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:54.664 ============================================================================== 00:09:54.664 Range in us Cumulative IO count 00:09:54.664 6369.362 - 6395.682: 0.0072% ( 1) 00:09:54.664 6395.682 - 6422.002: 0.0145% ( 1) 00:09:54.664 7158.953 - 7211.592: 0.0217% ( 1) 00:09:54.664 7211.592 - 7264.231: 0.0579% ( 5) 00:09:54.664 7264.231 - 7316.871: 0.0796% ( 3) 00:09:54.664 7316.871 - 7369.510: 0.1157% ( 5) 00:09:54.664 7369.510 - 7422.149: 0.1664% ( 7) 00:09:54.664 7422.149 - 7474.789: 0.2025% ( 5) 00:09:54.664 7474.789 - 7527.428: 0.2315% ( 4) 00:09:54.664 7527.428 - 7580.067: 0.2677% ( 5) 00:09:54.664 7580.067 - 7632.707: 0.3400% ( 10) 00:09:54.664 7632.707 - 7685.346: 0.7089% ( 51) 00:09:54.664 7685.346 - 7737.986: 1.4251% ( 99) 00:09:54.664 7737.986 - 7790.625: 2.7561% ( 184) 00:09:54.664 7790.625 - 7843.264: 4.4054% ( 228) 00:09:54.664 7843.264 - 7895.904: 6.2066% ( 249) 00:09:54.664 7895.904 - 7948.543: 8.3406% ( 295) 00:09:54.664 7948.543 - 8001.182: 10.4601% ( 293) 00:09:54.664 8001.182 - 8053.822: 12.6736% ( 306) 00:09:54.664 8053.822 - 8106.461: 14.8872% ( 306) 00:09:54.664 8106.461 - 8159.100: 17.1007% ( 306) 00:09:54.664 8159.100 - 8211.740: 19.4300% ( 322) 00:09:54.664 8211.740 - 8264.379: 21.8244% ( 331) 00:09:54.664 8264.379 - 8317.018: 24.1970% ( 328) 00:09:54.664 8317.018 - 8369.658: 26.5336% ( 323) 00:09:54.664 8369.658 - 8422.297: 28.9424% ( 333) 00:09:54.664 8422.297 - 8474.937: 31.4742% ( 350) 00:09:54.664 8474.937 - 8527.576: 33.8614% ( 330) 00:09:54.664 8527.576 - 8580.215: 36.3643% ( 346) 00:09:54.664 8580.215 - 8632.855: 38.9323% ( 355) 00:09:54.664 8632.855 - 8685.494: 41.5003% ( 355) 00:09:54.664 8685.494 - 8738.133: 44.0683% ( 355) 00:09:54.664 8738.133 - 8790.773: 46.5929% ( 349) 00:09:54.664 8790.773 - 8843.412: 49.2043% ( 361) 00:09:54.664 8843.412 - 8896.051: 51.8084% ( 360) 00:09:54.664 8896.051 - 8948.691: 54.3981% ( 358) 00:09:54.664 8948.691 - 9001.330: 57.0312% ( 364) 00:09:54.664 9001.330 - 9053.969: 59.7656% ( 378) 00:09:54.664 9053.969 - 9106.609: 62.3987% ( 364) 00:09:54.664 9106.609 - 9159.248: 65.0825% ( 371) 
00:09:54.664 9159.248 - 9211.888: 67.7373% ( 367) 00:09:54.664 9211.888 - 9264.527: 70.4138% ( 370) 00:09:54.664 9264.527 - 9317.166: 73.0830% ( 369) 00:09:54.664 9317.166 - 9369.806: 75.5281% ( 338) 00:09:54.664 9369.806 - 9422.445: 77.8284% ( 318) 00:09:54.664 9422.445 - 9475.084: 79.7381% ( 264) 00:09:54.664 9475.084 - 9527.724: 81.2789% ( 213) 00:09:54.664 9527.724 - 9580.363: 82.5883% ( 181) 00:09:54.664 9580.363 - 9633.002: 83.6661% ( 149) 00:09:54.664 9633.002 - 9685.642: 84.5486% ( 122) 00:09:54.664 9685.642 - 9738.281: 85.3299% ( 108) 00:09:54.664 9738.281 - 9790.920: 86.1183% ( 109) 00:09:54.664 9790.920 - 9843.560: 86.8851% ( 106) 00:09:54.664 9843.560 - 9896.199: 87.5362% ( 90) 00:09:54.664 9896.199 - 9948.839: 88.1583% ( 86) 00:09:54.664 9948.839 - 10001.478: 88.6863% ( 73) 00:09:54.664 10001.478 - 10054.117: 89.2216% ( 74) 00:09:54.664 10054.117 - 10106.757: 89.6701% ( 62) 00:09:54.664 10106.757 - 10159.396: 90.1042% ( 60) 00:09:54.664 10159.396 - 10212.035: 90.5165% ( 57) 00:09:54.664 10212.035 - 10264.675: 90.9433% ( 59) 00:09:54.664 10264.675 - 10317.314: 91.3122% ( 51) 00:09:54.664 10317.314 - 10369.953: 91.5943% ( 39) 00:09:54.664 10369.953 - 10422.593: 91.9054% ( 43) 00:09:54.664 10422.593 - 10475.232: 92.2092% ( 42) 00:09:54.664 10475.232 - 10527.871: 92.4769% ( 37) 00:09:54.664 10527.871 - 10580.511: 92.7590% ( 39) 00:09:54.664 10580.511 - 10633.150: 93.0049% ( 34) 00:09:54.664 10633.150 - 10685.790: 93.2653% ( 36) 00:09:54.664 10685.790 - 10738.429: 93.5185% ( 35) 00:09:54.664 10738.429 - 10791.068: 93.7862% ( 37) 00:09:54.664 10791.068 - 10843.708: 94.0104% ( 31) 00:09:54.664 10843.708 - 10896.347: 94.1840% ( 24) 00:09:54.664 10896.347 - 10948.986: 94.3359% ( 21) 00:09:54.664 10948.986 - 11001.626: 94.4806% ( 20) 00:09:54.664 11001.626 - 11054.265: 94.6181% ( 19) 00:09:54.664 11054.265 - 11106.904: 94.7555% ( 19) 00:09:54.664 11106.904 - 11159.544: 94.9219% ( 23) 00:09:54.664 11159.544 - 11212.183: 95.0810% ( 22) 00:09:54.664 11212.183 - 11264.822: 95.2185% ( 19) 00:09:54.664 11264.822 - 11317.462: 95.3631% ( 20) 00:09:54.664 11317.462 - 11370.101: 95.5078% ( 20) 00:09:54.664 11370.101 - 11422.741: 95.6380% ( 18) 00:09:54.664 11422.741 - 11475.380: 95.7104% ( 10) 00:09:54.664 11475.380 - 11528.019: 95.7899% ( 11) 00:09:54.664 11528.019 - 11580.659: 95.8550% ( 9) 00:09:54.664 11580.659 - 11633.298: 95.9274% ( 10) 00:09:54.664 11633.298 - 11685.937: 95.9925% ( 9) 00:09:54.664 11685.937 - 11738.577: 96.0286% ( 5) 00:09:54.664 11738.577 - 11791.216: 96.0648% ( 5) 00:09:54.664 11791.216 - 11843.855: 96.1155% ( 7) 00:09:54.664 11843.855 - 11896.495: 96.1589% ( 6) 00:09:54.664 11896.495 - 11949.134: 96.2023% ( 6) 00:09:54.664 11949.134 - 12001.773: 96.2529% ( 7) 00:09:54.664 12001.773 - 12054.413: 96.2963% ( 6) 00:09:54.664 12054.413 - 12107.052: 96.3397% ( 6) 00:09:54.664 12107.052 - 12159.692: 96.3831% ( 6) 00:09:54.664 12159.692 - 12212.331: 96.4265% ( 6) 00:09:54.664 12212.331 - 12264.970: 96.4627% ( 5) 00:09:54.664 12264.970 - 12317.610: 96.4988% ( 5) 00:09:54.664 12317.610 - 12370.249: 96.5495% ( 7) 00:09:54.664 12370.249 - 12422.888: 96.5929% ( 6) 00:09:54.664 12422.888 - 12475.528: 96.6363% ( 6) 00:09:54.664 12475.528 - 12528.167: 96.6869% ( 7) 00:09:54.664 12528.167 - 12580.806: 96.7231% ( 5) 00:09:54.664 12580.806 - 12633.446: 96.7665% ( 6) 00:09:54.664 12633.446 - 12686.085: 96.8027% ( 5) 00:09:54.664 12686.085 - 12738.724: 96.8171% ( 2) 00:09:54.664 12738.724 - 12791.364: 96.8388% ( 3) 00:09:54.664 12791.364 - 12844.003: 96.8605% ( 3) 00:09:54.664 12844.003 - 
12896.643: 96.8750% ( 2) 00:09:54.664 12896.643 - 12949.282: 96.8967% ( 3) 00:09:54.664 12949.282 - 13001.921: 96.9112% ( 2) 00:09:54.664 13001.921 - 13054.561: 96.9256% ( 2) 00:09:54.664 13054.561 - 13107.200: 96.9473% ( 3) 00:09:54.664 13107.200 - 13159.839: 96.9618% ( 2) 00:09:54.664 13159.839 - 13212.479: 96.9835% ( 3) 00:09:54.664 13212.479 - 13265.118: 96.9980% ( 2) 00:09:54.664 13265.118 - 13317.757: 97.0124% ( 2) 00:09:54.664 13317.757 - 13370.397: 97.0341% ( 3) 00:09:54.665 13370.397 - 13423.036: 97.0558% ( 3) 00:09:54.665 13423.036 - 13475.676: 97.0703% ( 2) 00:09:54.665 13475.676 - 13580.954: 97.1137% ( 6) 00:09:54.665 13580.954 - 13686.233: 97.1499% ( 5) 00:09:54.665 13686.233 - 13791.512: 97.1861% ( 5) 00:09:54.665 13791.512 - 13896.790: 97.2439% ( 8) 00:09:54.665 13896.790 - 14002.069: 97.2801% ( 5) 00:09:54.665 14002.069 - 14107.348: 97.3235% ( 6) 00:09:54.665 14107.348 - 14212.627: 97.3597% ( 5) 00:09:54.665 14212.627 - 14317.905: 97.4031% ( 6) 00:09:54.665 14317.905 - 14423.184: 97.4392% ( 5) 00:09:54.665 14423.184 - 14528.463: 97.4754% ( 5) 00:09:54.665 14528.463 - 14633.741: 97.5043% ( 4) 00:09:54.665 14633.741 - 14739.020: 97.5477% ( 6) 00:09:54.665 14739.020 - 14844.299: 97.5911% ( 6) 00:09:54.665 14844.299 - 14949.578: 97.6273% ( 5) 00:09:54.665 14949.578 - 15054.856: 97.6635% ( 5) 00:09:54.665 15054.856 - 15160.135: 97.6997% ( 5) 00:09:54.665 15160.135 - 15265.414: 97.7358% ( 5) 00:09:54.665 15265.414 - 15370.692: 97.7720% ( 5) 00:09:54.665 15370.692 - 15475.971: 97.8154% ( 6) 00:09:54.665 15475.971 - 15581.250: 97.8588% ( 6) 00:09:54.665 15581.250 - 15686.529: 97.9022% ( 6) 00:09:54.665 15686.529 - 15791.807: 97.9384% ( 5) 00:09:54.665 15791.807 - 15897.086: 97.9818% ( 6) 00:09:54.665 15897.086 - 16002.365: 98.0179% ( 5) 00:09:54.665 16002.365 - 16107.643: 98.0541% ( 5) 00:09:54.665 16107.643 - 16212.922: 98.0830% ( 4) 00:09:54.665 16212.922 - 16318.201: 98.1192% ( 5) 00:09:54.665 16318.201 - 16423.480: 98.1481% ( 4) 00:09:54.665 16634.037 - 16739.316: 98.1988% ( 7) 00:09:54.665 16739.316 - 16844.594: 98.2494% ( 7) 00:09:54.665 16844.594 - 16949.873: 98.3001% ( 7) 00:09:54.665 16949.873 - 17055.152: 98.3579% ( 8) 00:09:54.665 17055.152 - 17160.431: 98.4086% ( 7) 00:09:54.665 17160.431 - 17265.709: 98.4592% ( 7) 00:09:54.665 17265.709 - 17370.988: 98.5098% ( 7) 00:09:54.665 17370.988 - 17476.267: 98.5605% ( 7) 00:09:54.665 17476.267 - 17581.545: 98.6111% ( 7) 00:09:54.665 17581.545 - 17686.824: 98.6617% ( 7) 00:09:54.665 17686.824 - 17792.103: 98.7052% ( 6) 00:09:54.665 17792.103 - 17897.382: 98.7558% ( 7) 00:09:54.665 17897.382 - 18002.660: 98.8064% ( 7) 00:09:54.665 18002.660 - 18107.939: 98.8498% ( 6) 00:09:54.665 18107.939 - 18213.218: 98.9005% ( 7) 00:09:54.665 18213.218 - 18318.496: 98.9439% ( 6) 00:09:54.665 18318.496 - 18423.775: 98.9945% ( 7) 00:09:54.665 18423.775 - 18529.054: 99.0451% ( 7) 00:09:54.665 18529.054 - 18634.333: 99.0741% ( 4) 00:09:54.665 28635.810 - 28846.368: 99.1102% ( 5) 00:09:54.665 28846.368 - 29056.925: 99.2115% ( 14) 00:09:54.665 29056.925 - 29267.483: 99.3128% ( 14) 00:09:54.665 29267.483 - 29478.040: 99.3996% ( 12) 00:09:54.665 29478.040 - 29688.598: 99.5081% ( 15) 00:09:54.665 29688.598 - 29899.155: 99.5949% ( 12) 00:09:54.665 29899.155 - 30109.712: 99.6672% ( 10) 00:09:54.665 30109.712 - 30320.270: 99.7685% ( 14) 00:09:54.665 30320.270 - 30530.827: 99.8698% ( 14) 00:09:54.665 30530.827 - 30741.385: 99.9566% ( 12) 00:09:54.665 30741.385 - 30951.942: 100.0000% ( 6) 00:09:54.665 00:09:54.665 Latency histogram for PCIE (0000:00:08.0) 
NSID 1 from core 0: 00:09:54.665 ============================================================================== 00:09:54.665 Range in us Cumulative IO count 00:09:54.665 5606.092 - 5632.411: 0.0072% ( 1) 00:09:54.665 5632.411 - 5658.731: 0.0434% ( 5) 00:09:54.665 5658.731 - 5685.051: 0.0651% ( 3) 00:09:54.665 5685.051 - 5711.370: 0.0723% ( 1) 00:09:54.665 5711.370 - 5737.690: 0.0940% ( 3) 00:09:54.665 5737.690 - 5764.010: 0.1085% ( 2) 00:09:54.665 5764.010 - 5790.329: 0.1230% ( 2) 00:09:54.665 5790.329 - 5816.649: 0.1374% ( 2) 00:09:54.665 5816.649 - 5842.969: 0.1591% ( 3) 00:09:54.665 5842.969 - 5869.288: 0.1736% ( 2) 00:09:54.665 5869.288 - 5895.608: 0.1881% ( 2) 00:09:54.665 5895.608 - 5921.928: 0.2098% ( 3) 00:09:54.665 5921.928 - 5948.247: 0.2242% ( 2) 00:09:54.665 5948.247 - 5974.567: 0.2387% ( 2) 00:09:54.665 5974.567 - 6000.887: 0.2532% ( 2) 00:09:54.665 6000.887 - 6027.206: 0.2677% ( 2) 00:09:54.665 6027.206 - 6053.526: 0.2821% ( 2) 00:09:54.665 6053.526 - 6079.846: 0.2966% ( 2) 00:09:54.665 6079.846 - 6106.165: 0.3183% ( 3) 00:09:54.665 6106.165 - 6132.485: 0.3328% ( 2) 00:09:54.665 6132.485 - 6158.805: 0.3472% ( 2) 00:09:54.665 6158.805 - 6185.124: 0.3617% ( 2) 00:09:54.665 6185.124 - 6211.444: 0.3834% ( 3) 00:09:54.665 6211.444 - 6237.764: 0.3979% ( 2) 00:09:54.665 6237.764 - 6264.084: 0.4051% ( 1) 00:09:54.665 6264.084 - 6290.403: 0.4196% ( 2) 00:09:54.665 6290.403 - 6316.723: 0.4413% ( 3) 00:09:54.665 6316.723 - 6343.043: 0.4557% ( 2) 00:09:54.665 6343.043 - 6369.362: 0.4774% ( 3) 00:09:54.665 6369.362 - 6395.682: 0.4919% ( 2) 00:09:54.665 6395.682 - 6422.002: 0.5136% ( 3) 00:09:54.665 6422.002 - 6448.321: 0.5281% ( 2) 00:09:54.665 6448.321 - 6474.641: 0.5425% ( 2) 00:09:54.665 6474.641 - 6500.961: 0.5570% ( 2) 00:09:54.665 6500.961 - 6527.280: 0.5787% ( 3) 00:09:54.665 6527.280 - 6553.600: 0.5932% ( 2) 00:09:54.665 6553.600 - 6579.920: 0.6076% ( 2) 00:09:54.665 6579.920 - 6606.239: 0.6293% ( 3) 00:09:54.665 6606.239 - 6632.559: 0.6438% ( 2) 00:09:54.665 6632.559 - 6658.879: 0.6583% ( 2) 00:09:54.665 6658.879 - 6685.198: 0.6727% ( 2) 00:09:54.665 6685.198 - 6711.518: 0.6944% ( 3) 00:09:54.665 6711.518 - 6737.838: 0.7089% ( 2) 00:09:54.665 6737.838 - 6790.477: 0.7451% ( 5) 00:09:54.665 6790.477 - 6843.116: 0.7812% ( 5) 00:09:54.665 6843.116 - 6895.756: 0.8102% ( 4) 00:09:54.665 6895.756 - 6948.395: 0.8391% ( 4) 00:09:54.665 6948.395 - 7001.035: 0.8753% ( 5) 00:09:54.665 7001.035 - 7053.674: 0.9115% ( 5) 00:09:54.665 7053.674 - 7106.313: 0.9259% ( 2) 00:09:54.665 7527.428 - 7580.067: 0.9332% ( 1) 00:09:54.665 7580.067 - 7632.707: 1.0417% ( 15) 00:09:54.665 7632.707 - 7685.346: 1.3455% ( 42) 00:09:54.665 7685.346 - 7737.986: 1.9965% ( 90) 00:09:54.665 7737.986 - 7790.625: 3.0310% ( 143) 00:09:54.665 7790.625 - 7843.264: 4.5645% ( 212) 00:09:54.665 7843.264 - 7895.904: 6.2862% ( 238) 00:09:54.665 7895.904 - 7948.543: 8.3333% ( 283) 00:09:54.665 7948.543 - 8001.182: 10.5179% ( 302) 00:09:54.665 8001.182 - 8053.822: 12.7459% ( 308) 00:09:54.665 8053.822 - 8106.461: 14.9233% ( 301) 00:09:54.665 8106.461 - 8159.100: 17.1730% ( 311) 00:09:54.665 8159.100 - 8211.740: 19.4951% ( 321) 00:09:54.665 8211.740 - 8264.379: 21.8533% ( 326) 00:09:54.665 8264.379 - 8317.018: 24.2115% ( 326) 00:09:54.665 8317.018 - 8369.658: 26.6131% ( 332) 00:09:54.665 8369.658 - 8422.297: 29.0437% ( 336) 00:09:54.665 8422.297 - 8474.937: 31.4308% ( 330) 00:09:54.665 8474.937 - 8527.576: 33.8759% ( 338) 00:09:54.665 8527.576 - 8580.215: 36.4077% ( 350) 00:09:54.665 8580.215 - 8632.855: 38.9178% ( 347) 
00:09:54.665 8632.855 - 8685.494: 41.5871% ( 369) 00:09:54.665 8685.494 - 8738.133: 44.1840% ( 359) 00:09:54.665 8738.133 - 8790.773: 46.7665% ( 357) 00:09:54.665 8790.773 - 8843.412: 49.3634% ( 359) 00:09:54.665 8843.412 - 8896.051: 52.0110% ( 366) 00:09:54.665 8896.051 - 8948.691: 54.6441% ( 364) 00:09:54.665 8948.691 - 9001.330: 57.2555% ( 361) 00:09:54.665 9001.330 - 9053.969: 59.8814% ( 363) 00:09:54.665 9053.969 - 9106.609: 62.5651% ( 371) 00:09:54.665 9106.609 - 9159.248: 65.2199% ( 367) 00:09:54.665 9159.248 - 9211.888: 67.9543% ( 378) 00:09:54.665 9211.888 - 9264.527: 70.7321% ( 384) 00:09:54.665 9264.527 - 9317.166: 73.3652% ( 364) 00:09:54.665 9317.166 - 9369.806: 75.7595% ( 331) 00:09:54.665 9369.806 - 9422.445: 77.9080% ( 297) 00:09:54.665 9422.445 - 9475.084: 79.7815% ( 259) 00:09:54.665 9475.084 - 9527.724: 81.4019% ( 224) 00:09:54.665 9527.724 - 9580.363: 82.6823% ( 177) 00:09:54.665 9580.363 - 9633.002: 83.7023% ( 141) 00:09:54.665 9633.002 - 9685.642: 84.6354% ( 129) 00:09:54.665 9685.642 - 9738.281: 85.4818% ( 117) 00:09:54.665 9738.281 - 9790.920: 86.2775% ( 110) 00:09:54.665 9790.920 - 9843.560: 86.8996% ( 86) 00:09:54.665 9843.560 - 9896.199: 87.4711% ( 79) 00:09:54.665 9896.199 - 9948.839: 88.0281% ( 77) 00:09:54.665 9948.839 - 10001.478: 88.5055% ( 66) 00:09:54.665 10001.478 - 10054.117: 88.9974% ( 68) 00:09:54.665 10054.117 - 10106.757: 89.4965% ( 69) 00:09:54.665 10106.757 - 10159.396: 89.9306% ( 60) 00:09:54.665 10159.396 - 10212.035: 90.2995% ( 51) 00:09:54.665 10212.035 - 10264.675: 90.6539% ( 49) 00:09:54.665 10264.675 - 10317.314: 90.9578% ( 42) 00:09:54.666 10317.314 - 10369.953: 91.2471% ( 40) 00:09:54.666 10369.953 - 10422.593: 91.5075% ( 36) 00:09:54.666 10422.593 - 10475.232: 91.7535% ( 34) 00:09:54.666 10475.232 - 10527.871: 92.0067% ( 35) 00:09:54.666 10527.871 - 10580.511: 92.2598% ( 35) 00:09:54.666 10580.511 - 10633.150: 92.5058% ( 34) 00:09:54.666 10633.150 - 10685.790: 92.7445% ( 33) 00:09:54.666 10685.790 - 10738.429: 92.9760% ( 32) 00:09:54.666 10738.429 - 10791.068: 93.2364% ( 36) 00:09:54.666 10791.068 - 10843.708: 93.4462% ( 29) 00:09:54.666 10843.708 - 10896.347: 93.6487% ( 28) 00:09:54.666 10896.347 - 10948.986: 93.8585% ( 29) 00:09:54.666 10948.986 - 11001.626: 94.0394% ( 25) 00:09:54.666 11001.626 - 11054.265: 94.2419% ( 28) 00:09:54.666 11054.265 - 11106.904: 94.4227% ( 25) 00:09:54.666 11106.904 - 11159.544: 94.5602% ( 19) 00:09:54.666 11159.544 - 11212.183: 94.7193% ( 22) 00:09:54.666 11212.183 - 11264.822: 94.8278% ( 15) 00:09:54.666 11264.822 - 11317.462: 94.9580% ( 18) 00:09:54.666 11317.462 - 11370.101: 95.0666% ( 15) 00:09:54.666 11370.101 - 11422.741: 95.1534% ( 12) 00:09:54.666 11422.741 - 11475.380: 95.2329% ( 11) 00:09:54.666 11475.380 - 11528.019: 95.2980% ( 9) 00:09:54.666 11528.019 - 11580.659: 95.3704% ( 10) 00:09:54.666 11580.659 - 11633.298: 95.4644% ( 13) 00:09:54.666 11633.298 - 11685.937: 95.5440% ( 11) 00:09:54.666 11685.937 - 11738.577: 95.6163% ( 10) 00:09:54.666 11738.577 - 11791.216: 95.6959% ( 11) 00:09:54.666 11791.216 - 11843.855: 95.7899% ( 13) 00:09:54.666 11843.855 - 11896.495: 95.8912% ( 14) 00:09:54.666 11896.495 - 11949.134: 95.9925% ( 14) 00:09:54.666 11949.134 - 12001.773: 96.1010% ( 15) 00:09:54.666 12001.773 - 12054.413: 96.1806% ( 11) 00:09:54.666 12054.413 - 12107.052: 96.2240% ( 6) 00:09:54.666 12107.052 - 12159.692: 96.2746% ( 7) 00:09:54.666 12159.692 - 12212.331: 96.3180% ( 6) 00:09:54.666 12212.331 - 12264.970: 96.3686% ( 7) 00:09:54.666 12264.970 - 12317.610: 96.4048% ( 5) 00:09:54.666 
12317.610 - 12370.249: 96.4482% ( 6) 00:09:54.666 12370.249 - 12422.888: 96.4916% ( 6) 00:09:54.666 12422.888 - 12475.528: 96.5422% ( 7) 00:09:54.666 12475.528 - 12528.167: 96.5929% ( 7) 00:09:54.666 12528.167 - 12580.806: 96.6363% ( 6) 00:09:54.666 12580.806 - 12633.446: 96.6797% ( 6) 00:09:54.666 12633.446 - 12686.085: 96.7086% ( 4) 00:09:54.666 12686.085 - 12738.724: 96.7303% ( 3) 00:09:54.666 12738.724 - 12791.364: 96.7593% ( 4) 00:09:54.666 12791.364 - 12844.003: 96.7882% ( 4) 00:09:54.666 12844.003 - 12896.643: 96.8316% ( 6) 00:09:54.666 12896.643 - 12949.282: 96.8678% ( 5) 00:09:54.666 12949.282 - 13001.921: 96.9184% ( 7) 00:09:54.666 13001.921 - 13054.561: 96.9618% ( 6) 00:09:54.666 13054.561 - 13107.200: 96.9980% ( 5) 00:09:54.666 13107.200 - 13159.839: 97.0414% ( 6) 00:09:54.666 13159.839 - 13212.479: 97.0848% ( 6) 00:09:54.666 13212.479 - 13265.118: 97.1137% ( 4) 00:09:54.666 13265.118 - 13317.757: 97.1571% ( 6) 00:09:54.666 13317.757 - 13370.397: 97.1933% ( 5) 00:09:54.666 13370.397 - 13423.036: 97.2295% ( 5) 00:09:54.666 13423.036 - 13475.676: 97.2584% ( 4) 00:09:54.666 13475.676 - 13580.954: 97.3452% ( 12) 00:09:54.666 13580.954 - 13686.233: 97.4248% ( 11) 00:09:54.666 13686.233 - 13791.512: 97.4971% ( 10) 00:09:54.666 13791.512 - 13896.790: 97.5839% ( 12) 00:09:54.666 13896.790 - 14002.069: 97.6201% ( 5) 00:09:54.666 14002.069 - 14107.348: 97.6562% ( 5) 00:09:54.666 14107.348 - 14212.627: 97.6997% ( 6) 00:09:54.666 14212.627 - 14317.905: 97.7358% ( 5) 00:09:54.666 14317.905 - 14423.184: 97.7792% ( 6) 00:09:54.666 14423.184 - 14528.463: 97.8154% ( 5) 00:09:54.666 14528.463 - 14633.741: 97.8516% ( 5) 00:09:54.666 14633.741 - 14739.020: 97.8877% ( 5) 00:09:54.666 14739.020 - 14844.299: 97.9239% ( 5) 00:09:54.666 14844.299 - 14949.578: 97.9673% ( 6) 00:09:54.666 14949.578 - 15054.856: 98.0107% ( 6) 00:09:54.666 15054.856 - 15160.135: 98.0469% ( 5) 00:09:54.666 15160.135 - 15265.414: 98.0903% ( 6) 00:09:54.666 15265.414 - 15370.692: 98.1264% ( 5) 00:09:54.666 15370.692 - 15475.971: 98.1481% ( 3) 00:09:54.666 15475.971 - 15581.250: 98.1916% ( 6) 00:09:54.666 15581.250 - 15686.529: 98.2422% ( 7) 00:09:54.666 15686.529 - 15791.807: 98.2928% ( 7) 00:09:54.666 15791.807 - 15897.086: 98.3435% ( 7) 00:09:54.666 15897.086 - 16002.365: 98.3941% ( 7) 00:09:54.666 16002.365 - 16107.643: 98.4447% ( 7) 00:09:54.666 16107.643 - 16212.922: 98.4881% ( 6) 00:09:54.666 16212.922 - 16318.201: 98.5388% ( 7) 00:09:54.666 16318.201 - 16423.480: 98.5894% ( 7) 00:09:54.666 16423.480 - 16528.758: 98.6400% ( 7) 00:09:54.666 16528.758 - 16634.037: 98.6907% ( 7) 00:09:54.666 16634.037 - 16739.316: 98.7413% ( 7) 00:09:54.666 16739.316 - 16844.594: 98.7920% ( 7) 00:09:54.666 16844.594 - 16949.873: 98.8426% ( 7) 00:09:54.666 16949.873 - 17055.152: 98.8788% ( 5) 00:09:54.666 17055.152 - 17160.431: 98.9222% ( 6) 00:09:54.666 17160.431 - 17265.709: 98.9656% ( 6) 00:09:54.666 17265.709 - 17370.988: 99.0090% ( 6) 00:09:54.666 17370.988 - 17476.267: 99.0451% ( 5) 00:09:54.666 17476.267 - 17581.545: 99.0741% ( 4) 00:09:54.666 30741.385 - 30951.942: 99.1247% ( 7) 00:09:54.666 30951.942 - 31162.500: 99.2260% ( 14) 00:09:54.666 31162.500 - 31373.057: 99.3273% ( 14) 00:09:54.666 31373.057 - 31583.614: 99.4285% ( 14) 00:09:54.666 31583.614 - 31794.172: 99.5226% ( 13) 00:09:54.666 31794.172 - 32004.729: 99.6166% ( 13) 00:09:54.666 32004.729 - 32215.287: 99.7106% ( 13) 00:09:54.666 32215.287 - 32425.844: 99.8119% ( 14) 00:09:54.666 32425.844 - 32636.402: 99.9132% ( 14) 00:09:54.666 32636.402 - 32846.959: 100.0000% ( 12) 
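The bucket lines above come from the -LL latency histograms printed by the spdk_nvme_perf run: each "low - high: pct% ( count )" entry gives a bucket's bounds in microseconds, the cumulative percentage of I/Os completed at or below that bucket, and the raw I/O count that landed in it. A minimal sketch for reading an approximate percentile back out of such a run, assuming this exact bucket layout (with the log timestamps stripped) and a hypothetical capture file perf.log:

#!/usr/bin/env bash
# Approximate-percentile lookup over spdk_nvme_perf histogram output.
# Assumes bucket lines shaped like "7527.428 - 7580.067: 0.0145% ( 2)",
# i.e. microsecond bounds and a cumulative percentage; perf.log is a
# hypothetical file holding the captured run.
target=${1:-99}
awk -v t="$target" '
  $2 == "-" && $4 ~ /%$/ {
    high = $3; sub(/:$/, "", high)   # upper bucket bound in microseconds
    pct  = $4; sub(/%$/, "", pct)    # cumulative percent up to this bucket
    if (pct + 0 >= t + 0) { printf "p%s <= %s us\n", t, high; exit }
  }
' perf.log

Because the percentages are cumulative, the first bucket at or above the target percentile bounds that percentile from above; with only one second of I/O per the -t 1 invocation above, the tail buckets hold very few I/Os, so the estimate is coarse.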
00:09:54.666 00:09:54.666 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:54.666 ============================================================================== 00:09:54.666 Range in us Cumulative IO count 00:09:54.666 4895.460 - 4921.780: 0.0145% ( 2) 00:09:54.666 4921.780 - 4948.100: 0.0289% ( 2) 00:09:54.666 4948.100 - 4974.419: 0.0506% ( 3) 00:09:54.666 4974.419 - 5000.739: 0.0723% ( 3) 00:09:54.666 5000.739 - 5027.059: 0.0940% ( 3) 00:09:54.666 5027.059 - 5053.378: 0.1157% ( 3) 00:09:54.666 5053.378 - 5079.698: 0.1302% ( 2) 00:09:54.666 5079.698 - 5106.018: 0.1447% ( 2) 00:09:54.666 5106.018 - 5132.337: 0.1664% ( 3) 00:09:54.666 5132.337 - 5158.657: 0.1808% ( 2) 00:09:54.666 5158.657 - 5184.977: 0.1953% ( 2) 00:09:54.666 5184.977 - 5211.296: 0.2098% ( 2) 00:09:54.666 5211.296 - 5237.616: 0.2242% ( 2) 00:09:54.666 5237.616 - 5263.936: 0.2387% ( 2) 00:09:54.666 5263.936 - 5290.255: 0.2604% ( 3) 00:09:54.666 5290.255 - 5316.575: 0.2749% ( 2) 00:09:54.666 5316.575 - 5342.895: 0.2894% ( 2) 00:09:54.666 5342.895 - 5369.214: 0.3111% ( 3) 00:09:54.666 5369.214 - 5395.534: 0.3255% ( 2) 00:09:54.666 5395.534 - 5421.854: 0.3400% ( 2) 00:09:54.666 5421.854 - 5448.173: 0.3545% ( 2) 00:09:54.666 5448.173 - 5474.493: 0.3762% ( 3) 00:09:54.666 5474.493 - 5500.813: 0.3906% ( 2) 00:09:54.666 5500.813 - 5527.133: 0.4051% ( 2) 00:09:54.666 5527.133 - 5553.452: 0.4268% ( 3) 00:09:54.666 5553.452 - 5579.772: 0.4413% ( 2) 00:09:54.666 5579.772 - 5606.092: 0.4557% ( 2) 00:09:54.666 5606.092 - 5632.411: 0.4702% ( 2) 00:09:54.666 5632.411 - 5658.731: 0.4847% ( 2) 00:09:54.666 5658.731 - 5685.051: 0.5064% ( 3) 00:09:54.666 5685.051 - 5711.370: 0.5208% ( 2) 00:09:54.666 5711.370 - 5737.690: 0.5353% ( 2) 00:09:54.666 5737.690 - 5764.010: 0.5570% ( 3) 00:09:54.666 5764.010 - 5790.329: 0.5642% ( 1) 00:09:54.666 5790.329 - 5816.649: 0.5859% ( 3) 00:09:54.667 5816.649 - 5842.969: 0.6004% ( 2) 00:09:54.667 5842.969 - 5869.288: 0.6149% ( 2) 00:09:54.667 5869.288 - 5895.608: 0.6366% ( 3) 00:09:54.667 5895.608 - 5921.928: 0.6510% ( 2) 00:09:54.667 5921.928 - 5948.247: 0.6655% ( 2) 00:09:54.667 5948.247 - 5974.567: 0.6872% ( 3) 00:09:54.667 5974.567 - 6000.887: 0.7017% ( 2) 00:09:54.667 6000.887 - 6027.206: 0.7161% ( 2) 00:09:54.667 6027.206 - 6053.526: 0.7378% ( 3) 00:09:54.667 6053.526 - 6079.846: 0.7523% ( 2) 00:09:54.667 6079.846 - 6106.165: 0.7668% ( 2) 00:09:54.667 6106.165 - 6132.485: 0.7885% ( 3) 00:09:54.667 6132.485 - 6158.805: 0.8030% ( 2) 00:09:54.667 6158.805 - 6185.124: 0.8247% ( 3) 00:09:54.667 6185.124 - 6211.444: 0.8391% ( 2) 00:09:54.667 6211.444 - 6237.764: 0.8536% ( 2) 00:09:54.667 6237.764 - 6264.084: 0.8681% ( 2) 00:09:54.667 6264.084 - 6290.403: 0.8825% ( 2) 00:09:54.667 6290.403 - 6316.723: 0.8970% ( 2) 00:09:54.667 6316.723 - 6343.043: 0.9115% ( 2) 00:09:54.667 6343.043 - 6369.362: 0.9259% ( 2) 00:09:54.667 7527.428 - 7580.067: 0.9621% ( 5) 00:09:54.667 7580.067 - 7632.707: 1.0634% ( 14) 00:09:54.667 7632.707 - 7685.346: 1.3093% ( 34) 00:09:54.667 7685.346 - 7737.986: 1.8953% ( 81) 00:09:54.667 7737.986 - 7790.625: 2.8935% ( 138) 00:09:54.667 7790.625 - 7843.264: 4.4922% ( 221) 00:09:54.667 7843.264 - 7895.904: 6.3513% ( 257) 00:09:54.667 7895.904 - 7948.543: 8.3261% ( 273) 00:09:54.667 7948.543 - 8001.182: 10.5396% ( 306) 00:09:54.667 8001.182 - 8053.822: 12.7025% ( 299) 00:09:54.667 8053.822 - 8106.461: 14.8727% ( 300) 00:09:54.667 8106.461 - 8159.100: 17.1369% ( 313) 00:09:54.667 8159.100 - 8211.740: 19.3576% ( 307) 00:09:54.667 8211.740 - 8264.379: 21.6146% ( 312) 
00:09:54.667 8264.379 - 8317.018: 23.9149% ( 318) 00:09:54.667 8317.018 - 8369.658: 26.2804% ( 327) 00:09:54.667 8369.658 - 8422.297: 28.7905% ( 347) 00:09:54.667 8422.297 - 8474.937: 31.2789% ( 344) 00:09:54.667 8474.937 - 8527.576: 33.7384% ( 340) 00:09:54.667 8527.576 - 8580.215: 36.3354% ( 359) 00:09:54.667 8580.215 - 8632.855: 38.9251% ( 358) 00:09:54.667 8632.855 - 8685.494: 41.4786% ( 353) 00:09:54.667 8685.494 - 8738.133: 44.1045% ( 363) 00:09:54.667 8738.133 - 8790.773: 46.7303% ( 363) 00:09:54.667 8790.773 - 8843.412: 49.3056% ( 356) 00:09:54.667 8843.412 - 8896.051: 51.9097% ( 360) 00:09:54.667 8896.051 - 8948.691: 54.6152% ( 374) 00:09:54.667 8948.691 - 9001.330: 57.2555% ( 365) 00:09:54.667 9001.330 - 9053.969: 59.8524% ( 359) 00:09:54.667 9053.969 - 9106.609: 62.5362% ( 371) 00:09:54.667 9106.609 - 9159.248: 65.2995% ( 382) 00:09:54.667 9159.248 - 9211.888: 68.0194% ( 376) 00:09:54.667 9211.888 - 9264.527: 70.6380% ( 362) 00:09:54.667 9264.527 - 9317.166: 73.2494% ( 361) 00:09:54.667 9317.166 - 9369.806: 75.7306% ( 343) 00:09:54.667 9369.806 - 9422.445: 78.0237% ( 317) 00:09:54.667 9422.445 - 9475.084: 79.9045% ( 260) 00:09:54.667 9475.084 - 9527.724: 81.5538% ( 228) 00:09:54.667 9527.724 - 9580.363: 82.8487% ( 179) 00:09:54.667 9580.363 - 9633.002: 83.9265% ( 149) 00:09:54.667 9633.002 - 9685.642: 84.8452% ( 127) 00:09:54.667 9685.642 - 9738.281: 85.6192% ( 107) 00:09:54.667 9738.281 - 9790.920: 86.3498% ( 101) 00:09:54.667 9790.920 - 9843.560: 87.0443% ( 96) 00:09:54.667 9843.560 - 9896.199: 87.6664% ( 86) 00:09:54.667 9896.199 - 9948.839: 88.2017% ( 74) 00:09:54.667 9948.839 - 10001.478: 88.7587% ( 77) 00:09:54.667 10001.478 - 10054.117: 89.3012% ( 75) 00:09:54.667 10054.117 - 10106.757: 89.7859% ( 67) 00:09:54.667 10106.757 - 10159.396: 90.2778% ( 68) 00:09:54.667 10159.396 - 10212.035: 90.7190% ( 61) 00:09:54.667 10212.035 - 10264.675: 91.1458% ( 59) 00:09:54.667 10264.675 - 10317.314: 91.4424% ( 41) 00:09:54.667 10317.314 - 10369.953: 91.6956% ( 35) 00:09:54.667 10369.953 - 10422.593: 91.9271% ( 32) 00:09:54.667 10422.593 - 10475.232: 92.1658% ( 33) 00:09:54.667 10475.232 - 10527.871: 92.3828% ( 30) 00:09:54.667 10527.871 - 10580.511: 92.5854% ( 28) 00:09:54.667 10580.511 - 10633.150: 92.8024% ( 30) 00:09:54.667 10633.150 - 10685.790: 92.9977% ( 27) 00:09:54.667 10685.790 - 10738.429: 93.1858% ( 26) 00:09:54.667 10738.429 - 10791.068: 93.3955% ( 29) 00:09:54.667 10791.068 - 10843.708: 93.5764% ( 25) 00:09:54.667 10843.708 - 10896.347: 93.7645% ( 26) 00:09:54.667 10896.347 - 10948.986: 93.9236% ( 22) 00:09:54.667 10948.986 - 11001.626: 94.0755% ( 21) 00:09:54.667 11001.626 - 11054.265: 94.1913% ( 16) 00:09:54.667 11054.265 - 11106.904: 94.3142% ( 17) 00:09:54.667 11106.904 - 11159.544: 94.4227% ( 15) 00:09:54.667 11159.544 - 11212.183: 94.5530% ( 18) 00:09:54.667 11212.183 - 11264.822: 94.6470% ( 13) 00:09:54.667 11264.822 - 11317.462: 94.7410% ( 13) 00:09:54.667 11317.462 - 11370.101: 94.8278% ( 12) 00:09:54.667 11370.101 - 11422.741: 94.8712% ( 6) 00:09:54.667 11422.741 - 11475.380: 94.9219% ( 7) 00:09:54.667 11475.380 - 11528.019: 94.9580% ( 5) 00:09:54.667 11528.019 - 11580.659: 95.0014% ( 6) 00:09:54.667 11580.659 - 11633.298: 95.0521% ( 7) 00:09:54.667 11633.298 - 11685.937: 95.1172% ( 9) 00:09:54.667 11685.937 - 11738.577: 95.1751% ( 8) 00:09:54.667 11738.577 - 11791.216: 95.2329% ( 8) 00:09:54.667 11791.216 - 11843.855: 95.3053% ( 10) 00:09:54.667 11843.855 - 11896.495: 95.3631% ( 8) 00:09:54.667 11896.495 - 11949.134: 95.4138% ( 7) 00:09:54.667 11949.134 - 
12001.773: 95.4572% ( 6) 00:09:54.667 12001.773 - 12054.413: 95.5295% ( 10) 00:09:54.667 12054.413 - 12107.052: 95.5874% ( 8) 00:09:54.667 12107.052 - 12159.692: 95.6597% ( 10) 00:09:54.667 12159.692 - 12212.331: 95.7104% ( 7) 00:09:54.667 12212.331 - 12264.970: 95.7465% ( 5) 00:09:54.667 12264.970 - 12317.610: 95.7972% ( 7) 00:09:54.667 12317.610 - 12370.249: 95.8406% ( 6) 00:09:54.667 12370.249 - 12422.888: 95.8984% ( 8) 00:09:54.667 12422.888 - 12475.528: 95.9491% ( 7) 00:09:54.667 12475.528 - 12528.167: 95.9925% ( 6) 00:09:54.667 12528.167 - 12580.806: 96.0431% ( 7) 00:09:54.667 12580.806 - 12633.446: 96.0938% ( 7) 00:09:54.667 12633.446 - 12686.085: 96.1444% ( 7) 00:09:54.667 12686.085 - 12738.724: 96.1878% ( 6) 00:09:54.667 12738.724 - 12791.364: 96.2384% ( 7) 00:09:54.667 12791.364 - 12844.003: 96.2818% ( 6) 00:09:54.667 12844.003 - 12896.643: 96.3397% ( 8) 00:09:54.667 12896.643 - 12949.282: 96.4048% ( 9) 00:09:54.667 12949.282 - 13001.921: 96.4627% ( 8) 00:09:54.667 13001.921 - 13054.561: 96.5278% ( 9) 00:09:54.667 13054.561 - 13107.200: 96.6001% ( 10) 00:09:54.667 13107.200 - 13159.839: 96.6652% ( 9) 00:09:54.667 13159.839 - 13212.479: 96.7448% ( 11) 00:09:54.667 13212.479 - 13265.118: 96.8099% ( 9) 00:09:54.667 13265.118 - 13317.757: 96.8822% ( 10) 00:09:54.667 13317.757 - 13370.397: 96.9546% ( 10) 00:09:54.667 13370.397 - 13423.036: 97.0269% ( 10) 00:09:54.667 13423.036 - 13475.676: 97.1065% ( 11) 00:09:54.667 13475.676 - 13580.954: 97.2439% ( 19) 00:09:54.667 13580.954 - 13686.233: 97.3886% ( 20) 00:09:54.667 13686.233 - 13791.512: 97.4682% ( 11) 00:09:54.667 13791.512 - 13896.790: 97.5550% ( 12) 00:09:54.667 13896.790 - 14002.069: 97.6345% ( 11) 00:09:54.667 14002.069 - 14107.348: 97.7141% ( 11) 00:09:54.667 14107.348 - 14212.627: 97.7937% ( 11) 00:09:54.667 14212.627 - 14317.905: 97.8443% ( 7) 00:09:54.667 14317.905 - 14423.184: 97.9022% ( 8) 00:09:54.667 14423.184 - 14528.463: 97.9890% ( 12) 00:09:54.667 14528.463 - 14633.741: 98.0758% ( 12) 00:09:54.667 14633.741 - 14739.020: 98.1626% ( 12) 00:09:54.667 14739.020 - 14844.299: 98.2567% ( 13) 00:09:54.667 14844.299 - 14949.578: 98.3507% ( 13) 00:09:54.667 14949.578 - 15054.856: 98.4086% ( 8) 00:09:54.667 15054.856 - 15160.135: 98.4447% ( 5) 00:09:54.667 15160.135 - 15265.414: 98.4881% ( 6) 00:09:54.667 15265.414 - 15370.692: 98.5243% ( 5) 00:09:54.667 15370.692 - 15475.971: 98.5677% ( 6) 00:09:54.667 15475.971 - 15581.250: 98.6111% ( 6) 00:09:54.667 15581.250 - 15686.529: 98.6545% ( 6) 00:09:54.667 15686.529 - 15791.807: 98.6979% ( 6) 00:09:54.667 15791.807 - 15897.086: 98.7341% ( 5) 00:09:54.667 15897.086 - 16002.365: 98.7775% ( 6) 00:09:54.667 16002.365 - 16107.643: 98.8209% ( 6) 00:09:54.667 16107.643 - 16212.922: 98.8643% ( 6) 00:09:54.667 16212.922 - 16318.201: 98.9077% ( 6) 00:09:54.667 16318.201 - 16423.480: 98.9511% ( 6) 00:09:54.667 16423.480 - 16528.758: 98.9945% ( 6) 00:09:54.667 16528.758 - 16634.037: 99.0307% ( 5) 00:09:54.667 16634.037 - 16739.316: 99.0741% ( 6) 00:09:54.667 31162.500 - 31373.057: 99.1681% ( 13) 00:09:54.667 31373.057 - 31583.614: 99.2694% ( 14) 00:09:54.667 31583.614 - 31794.172: 99.3562% ( 12) 00:09:54.667 31794.172 - 32004.729: 99.4575% ( 14) 00:09:54.667 32004.729 - 32215.287: 99.5587% ( 14) 00:09:54.667 32215.287 - 32425.844: 99.6600% ( 14) 00:09:54.667 32425.844 - 32636.402: 99.7541% ( 13) 00:09:54.667 32636.402 - 32846.959: 99.8481% ( 13) 00:09:54.668 32846.959 - 33057.516: 99.9566% ( 15) 00:09:54.668 33057.516 - 33268.074: 100.0000% ( 6) 00:09:54.668 00:09:54.668 Latency histogram for 
PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:54.668 ============================================================================== 00:09:54.668 Range in us Cumulative IO count 00:09:54.668 4184.829 - 4211.149: 0.0143% ( 2) 00:09:54.668 4211.149 - 4237.468: 0.0430% ( 4) 00:09:54.668 4237.468 - 4263.788: 0.0573% ( 2) 00:09:54.668 4263.788 - 4290.108: 0.0717% ( 2) 00:09:54.668 4290.108 - 4316.427: 0.0860% ( 2) 00:09:54.668 4316.427 - 4342.747: 0.1075% ( 3) 00:09:54.668 4342.747 - 4369.067: 0.1218% ( 2) 00:09:54.668 4369.067 - 4395.386: 0.1433% ( 3) 00:09:54.668 4395.386 - 4421.706: 0.1649% ( 3) 00:09:54.668 4421.706 - 4448.026: 0.1792% ( 2) 00:09:54.668 4448.026 - 4474.345: 0.2007% ( 3) 00:09:54.668 4474.345 - 4500.665: 0.2150% ( 2) 00:09:54.668 4500.665 - 4526.985: 0.2365% ( 3) 00:09:54.668 4526.985 - 4553.304: 0.2509% ( 2) 00:09:54.668 4553.304 - 4579.624: 0.2652% ( 2) 00:09:54.668 4579.624 - 4605.944: 0.2795% ( 2) 00:09:54.668 4605.944 - 4632.263: 0.3010% ( 3) 00:09:54.668 4632.263 - 4658.583: 0.3154% ( 2) 00:09:54.668 4658.583 - 4684.903: 0.3297% ( 2) 00:09:54.668 4684.903 - 4711.222: 0.3512% ( 3) 00:09:54.668 4711.222 - 4737.542: 0.3655% ( 2) 00:09:54.668 4737.542 - 4763.862: 0.3799% ( 2) 00:09:54.668 4763.862 - 4790.182: 0.3942% ( 2) 00:09:54.668 4790.182 - 4816.501: 0.4014% ( 1) 00:09:54.668 4816.501 - 4842.821: 0.4157% ( 2) 00:09:54.668 4842.821 - 4869.141: 0.4300% ( 2) 00:09:54.668 4869.141 - 4895.460: 0.4444% ( 2) 00:09:54.668 4895.460 - 4921.780: 0.4587% ( 2) 00:09:54.668 4921.780 - 4948.100: 0.4731% ( 2) 00:09:54.668 4948.100 - 4974.419: 0.4946% ( 3) 00:09:54.668 4974.419 - 5000.739: 0.5089% ( 2) 00:09:54.668 5000.739 - 5027.059: 0.5232% ( 2) 00:09:54.668 5027.059 - 5053.378: 0.5376% ( 2) 00:09:54.668 5053.378 - 5079.698: 0.5591% ( 3) 00:09:54.668 5079.698 - 5106.018: 0.5734% ( 2) 00:09:54.668 5106.018 - 5132.337: 0.5877% ( 2) 00:09:54.668 5132.337 - 5158.657: 0.6092% ( 3) 00:09:54.668 5158.657 - 5184.977: 0.6236% ( 2) 00:09:54.668 5184.977 - 5211.296: 0.6379% ( 2) 00:09:54.668 5211.296 - 5237.616: 0.6522% ( 2) 00:09:54.668 5237.616 - 5263.936: 0.6737% ( 3) 00:09:54.668 5263.936 - 5290.255: 0.6881% ( 2) 00:09:54.668 5290.255 - 5316.575: 0.7024% ( 2) 00:09:54.668 5316.575 - 5342.895: 0.7239% ( 3) 00:09:54.668 5342.895 - 5369.214: 0.7382% ( 2) 00:09:54.668 5369.214 - 5395.534: 0.7597% ( 3) 00:09:54.668 5395.534 - 5421.854: 0.7741% ( 2) 00:09:54.668 5421.854 - 5448.173: 0.7956% ( 3) 00:09:54.668 5448.173 - 5474.493: 0.8099% ( 2) 00:09:54.668 5474.493 - 5500.813: 0.8243% ( 2) 00:09:54.668 5500.813 - 5527.133: 0.8458% ( 3) 00:09:54.668 5527.133 - 5553.452: 0.8601% ( 2) 00:09:54.668 5553.452 - 5579.772: 0.8673% ( 1) 00:09:54.668 5579.772 - 5606.092: 0.8816% ( 2) 00:09:54.668 5606.092 - 5632.411: 0.8959% ( 2) 00:09:54.668 5632.411 - 5658.731: 0.9103% ( 2) 00:09:54.668 5658.731 - 5685.051: 0.9174% ( 1) 00:09:54.668 7527.428 - 7580.067: 0.9389% ( 3) 00:09:54.668 7580.067 - 7632.707: 1.0034% ( 9) 00:09:54.668 7632.707 - 7685.346: 1.2901% ( 40) 00:09:54.668 7685.346 - 7737.986: 1.9997% ( 99) 00:09:54.668 7737.986 - 7790.625: 3.2110% ( 169) 00:09:54.668 7790.625 - 7843.264: 4.8022% ( 222) 00:09:54.668 7843.264 - 7895.904: 6.8162% ( 281) 00:09:54.668 7895.904 - 7948.543: 8.9019% ( 291) 00:09:54.668 7948.543 - 8001.182: 11.0307% ( 297) 00:09:54.668 8001.182 - 8053.822: 13.2167% ( 305) 00:09:54.668 8053.822 - 8106.461: 15.4028% ( 305) 00:09:54.668 8106.461 - 8159.100: 17.6032% ( 307) 00:09:54.668 8159.100 - 8211.740: 19.7391% ( 298) 00:09:54.668 8211.740 - 8264.379: 21.9968% ( 315) 00:09:54.668 
8264.379 - 8317.018: 24.3119% ( 323) 00:09:54.668 8317.018 - 8369.658: 26.6915% ( 332) 00:09:54.668 8369.658 - 8422.297: 29.0281% ( 326) 00:09:54.668 8422.297 - 8474.937: 31.4292% ( 335) 00:09:54.668 8474.937 - 8527.576: 33.7801% ( 328) 00:09:54.668 8527.576 - 8580.215: 36.1239% ( 327) 00:09:54.668 8580.215 - 8632.855: 38.5894% ( 344) 00:09:54.668 8632.855 - 8685.494: 41.0622% ( 345) 00:09:54.668 8685.494 - 8738.133: 43.6425% ( 360) 00:09:54.668 8738.133 - 8790.773: 46.2443% ( 363) 00:09:54.668 8790.773 - 8843.412: 48.8747% ( 367) 00:09:54.668 8843.412 - 8896.051: 51.4622% ( 361) 00:09:54.668 8896.051 - 8948.691: 54.1141% ( 370) 00:09:54.668 8948.691 - 9001.330: 56.6944% ( 360) 00:09:54.668 9001.330 - 9053.969: 59.2317% ( 354) 00:09:54.668 9053.969 - 9106.609: 61.8406% ( 364) 00:09:54.668 9106.609 - 9159.248: 64.4997% ( 371) 00:09:54.668 9159.248 - 9211.888: 67.1373% ( 368) 00:09:54.668 9211.888 - 9264.527: 69.8108% ( 373) 00:09:54.668 9264.527 - 9317.166: 72.4627% ( 370) 00:09:54.668 9317.166 - 9369.806: 74.9928% ( 353) 00:09:54.668 9369.806 - 9422.445: 77.2219% ( 311) 00:09:54.668 9422.445 - 9475.084: 79.2718% ( 286) 00:09:54.668 9475.084 - 9527.724: 80.7841% ( 211) 00:09:54.668 9527.724 - 9580.363: 82.0456% ( 176) 00:09:54.668 9580.363 - 9633.002: 83.0132% ( 135) 00:09:54.668 9633.002 - 9685.642: 83.8088% ( 111) 00:09:54.668 9685.642 - 9738.281: 84.6044% ( 111) 00:09:54.668 9738.281 - 9790.920: 85.3641% ( 106) 00:09:54.668 9790.920 - 9843.560: 86.0235% ( 92) 00:09:54.668 9843.560 - 9896.199: 86.6614% ( 89) 00:09:54.668 9896.199 - 9948.839: 87.2205% ( 78) 00:09:54.668 9948.839 - 10001.478: 87.7652% ( 76) 00:09:54.668 10001.478 - 10054.117: 88.3314% ( 79) 00:09:54.668 10054.117 - 10106.757: 88.8116% ( 67) 00:09:54.668 10106.757 - 10159.396: 89.2345% ( 59) 00:09:54.668 10159.396 - 10212.035: 89.6646% ( 60) 00:09:54.668 10212.035 - 10264.675: 90.1018% ( 61) 00:09:54.668 10264.675 - 10317.314: 90.5318% ( 60) 00:09:54.668 10317.314 - 10369.953: 90.8902% ( 50) 00:09:54.668 10369.953 - 10422.593: 91.2916% ( 56) 00:09:54.668 10422.593 - 10475.232: 91.5998% ( 43) 00:09:54.668 10475.232 - 10527.871: 91.8650% ( 37) 00:09:54.668 10527.871 - 10580.511: 92.1087% ( 34) 00:09:54.668 10580.511 - 10633.150: 92.3595% ( 35) 00:09:54.668 10633.150 - 10685.790: 92.6319% ( 38) 00:09:54.668 10685.790 - 10738.429: 92.8541% ( 31) 00:09:54.668 10738.429 - 10791.068: 93.0763% ( 31) 00:09:54.668 10791.068 - 10843.708: 93.3128% ( 33) 00:09:54.668 10843.708 - 10896.347: 93.5063% ( 27) 00:09:54.668 10896.347 - 10948.986: 93.6927% ( 26) 00:09:54.668 10948.986 - 11001.626: 93.8360% ( 20) 00:09:54.668 11001.626 - 11054.265: 94.0080% ( 24) 00:09:54.668 11054.265 - 11106.904: 94.1514% ( 20) 00:09:54.668 11106.904 - 11159.544: 94.3162% ( 23) 00:09:54.668 11159.544 - 11212.183: 94.4452% ( 18) 00:09:54.668 11212.183 - 11264.822: 94.5886% ( 20) 00:09:54.668 11264.822 - 11317.462: 94.7319% ( 20) 00:09:54.668 11317.462 - 11370.101: 94.8896% ( 22) 00:09:54.668 11370.101 - 11422.741: 95.0330% ( 20) 00:09:54.668 11422.741 - 11475.380: 95.1476% ( 16) 00:09:54.668 11475.380 - 11528.019: 95.2408% ( 13) 00:09:54.668 11528.019 - 11580.659: 95.3268% ( 12) 00:09:54.668 11580.659 - 11633.298: 95.4272% ( 14) 00:09:54.668 11633.298 - 11685.937: 95.5132% ( 12) 00:09:54.668 11685.937 - 11738.577: 95.5634% ( 7) 00:09:54.668 11738.577 - 11791.216: 95.5992% ( 5) 00:09:54.668 11791.216 - 11843.855: 95.6350% ( 5) 00:09:54.668 11843.855 - 11896.495: 95.6709% ( 5) 00:09:54.668 11896.495 - 11949.134: 95.6852% ( 2) 00:09:54.668 11949.134 - 12001.773: 
95.6995% ( 2) 00:09:54.668 12001.773 - 12054.413: 95.7210% ( 3) 00:09:54.668 12054.413 - 12107.052: 95.7354% ( 2) 00:09:54.668 12107.052 - 12159.692: 95.7569% ( 3) 00:09:54.668 12159.692 - 12212.331: 95.7784% ( 3) 00:09:54.668 12212.331 - 12264.970: 95.7999% ( 3) 00:09:54.668 12264.970 - 12317.610: 95.8142% ( 2) 00:09:54.668 12317.610 - 12370.249: 95.8357% ( 3) 00:09:54.668 12370.249 - 12422.888: 95.8501% ( 2) 00:09:54.668 12422.888 - 12475.528: 95.8716% ( 3) 00:09:54.668 12475.528 - 12528.167: 95.8931% ( 3) 00:09:54.668 12528.167 - 12580.806: 95.9074% ( 2) 00:09:54.668 12580.806 - 12633.446: 95.9217% ( 2) 00:09:54.668 12633.446 - 12686.085: 95.9432% ( 3) 00:09:54.668 12686.085 - 12738.724: 95.9576% ( 2) 00:09:54.668 12738.724 - 12791.364: 95.9791% ( 3) 00:09:54.668 12791.364 - 12844.003: 95.9934% ( 2) 00:09:54.668 12844.003 - 12896.643: 96.0364% ( 6) 00:09:54.668 12896.643 - 12949.282: 96.0794% ( 6) 00:09:54.668 12949.282 - 13001.921: 96.1153% ( 5) 00:09:54.668 13001.921 - 13054.561: 96.1583% ( 6) 00:09:54.668 13054.561 - 13107.200: 96.1941% ( 5) 00:09:54.668 13107.200 - 13159.839: 96.2371% ( 6) 00:09:54.668 13159.839 - 13212.479: 96.2729% ( 5) 00:09:54.668 13212.479 - 13265.118: 96.3159% ( 6) 00:09:54.668 13265.118 - 13317.757: 96.3589% ( 6) 00:09:54.668 13317.757 - 13370.397: 96.3948% ( 5) 00:09:54.668 13370.397 - 13423.036: 96.4593% ( 9) 00:09:54.668 13423.036 - 13475.676: 96.5095% ( 7) 00:09:54.668 13475.676 - 13580.954: 96.6385% ( 18) 00:09:54.668 13580.954 - 13686.233: 96.7747% ( 19) 00:09:54.668 13686.233 - 13791.512: 96.8822% ( 15) 00:09:54.668 13791.512 - 13896.790: 97.0112% ( 18) 00:09:54.668 13896.790 - 14002.069: 97.1545% ( 20) 00:09:54.668 14002.069 - 14107.348: 97.3050% ( 21) 00:09:54.668 14107.348 - 14212.627: 97.4269% ( 17) 00:09:54.668 14212.627 - 14317.905: 97.5774% ( 21) 00:09:54.668 14317.905 - 14423.184: 97.7208% ( 20) 00:09:54.668 14423.184 - 14528.463: 97.8641% ( 20) 00:09:54.668 14528.463 - 14633.741: 98.0075% ( 20) 00:09:54.668 14633.741 - 14739.020: 98.1508% ( 20) 00:09:54.668 14739.020 - 14844.299: 98.2942% ( 20) 00:09:54.669 14844.299 - 14949.578: 98.4160% ( 17) 00:09:54.669 14949.578 - 15054.856: 98.5665% ( 21) 00:09:54.669 15054.856 - 15160.135: 98.7170% ( 21) 00:09:54.669 15160.135 - 15265.414: 98.8389% ( 17) 00:09:54.669 15265.414 - 15370.692: 98.9464% ( 15) 00:09:54.669 15370.692 - 15475.971: 99.0396% ( 13) 00:09:54.669 15475.971 - 15581.250: 99.0826% ( 6) 00:09:54.669 20424.071 - 20529.349: 99.0897% ( 1) 00:09:54.669 20529.349 - 20634.628: 99.1327% ( 6) 00:09:54.669 20634.628 - 20739.907: 99.1757% ( 6) 00:09:54.669 20739.907 - 20845.186: 99.2259% ( 7) 00:09:54.669 20845.186 - 20950.464: 99.2761% ( 7) 00:09:54.669 20950.464 - 21055.743: 99.3263% ( 7) 00:09:54.669 21055.743 - 21161.022: 99.3764% ( 7) 00:09:54.669 21161.022 - 21266.300: 99.4338% ( 8) 00:09:54.669 21266.300 - 21371.579: 99.4768% ( 6) 00:09:54.669 21371.579 - 21476.858: 99.5269% ( 7) 00:09:54.669 21476.858 - 21582.137: 99.5700% ( 6) 00:09:54.669 21582.137 - 21687.415: 99.6201% ( 7) 00:09:54.669 21687.415 - 21792.694: 99.6703% ( 7) 00:09:54.669 21792.694 - 21897.973: 99.7205% ( 7) 00:09:54.669 21897.973 - 22003.251: 99.7706% ( 7) 00:09:54.669 22003.251 - 22108.530: 99.8208% ( 7) 00:09:54.669 22108.530 - 22213.809: 99.8710% ( 7) 00:09:54.669 22213.809 - 22319.088: 99.9212% ( 7) 00:09:54.669 22319.088 - 22424.366: 99.9713% ( 7) 00:09:54.669 22424.366 - 22529.645: 100.0000% ( 4) 00:09:54.669 00:09:54.669 17:56:11 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 
-w write -o 12288 -t 1 -LL -i 0 00:09:56.044 Initializing NVMe Controllers 00:09:56.044 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:56.044 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:56.044 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:56.044 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:56.044 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:56.044 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:56.044 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:56.044 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:56.044 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:56.044 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:56.044 Initialization complete. Launching workers. 00:09:56.044 ======================================================== 00:09:56.044 Latency(us) 00:09:56.044 Device Information : IOPS MiB/s Average min max 00:09:56.044 PCIE (0000:00:09.0) NSID 1 from core 0: 15583.24 182.62 8211.62 5578.95 26990.75 00:09:56.044 PCIE (0000:00:06.0) NSID 1 from core 0: 15583.24 182.62 8207.23 5136.74 29663.80 00:09:56.044 PCIE (0000:00:07.0) NSID 1 from core 0: 15583.24 182.62 8202.71 5583.67 31175.56 00:09:56.044 PCIE (0000:00:08.0) NSID 1 from core 0: 15583.24 182.62 8198.57 5387.01 32132.92 00:09:56.044 PCIE (0000:00:08.0) NSID 2 from core 0: 15583.24 182.62 8194.43 4742.89 34838.27 00:09:56.044 PCIE (0000:00:08.0) NSID 3 from core 0: 15583.24 182.62 8189.96 4140.22 35849.17 00:09:56.044 ======================================================== 00:09:56.044 Total : 93499.46 1095.70 8200.75 4140.22 35849.17 00:09:56.044 00:09:56.044 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:56.044 ================================================================================= 00:09:56.044 1.00000% : 6027.206us 00:09:56.044 10.00000% : 6422.002us 00:09:56.044 25.00000% : 7211.592us 00:09:56.044 50.00000% : 8317.018us 00:09:56.044 75.00000% : 8738.133us 00:09:56.044 90.00000% : 9159.248us 00:09:56.044 95.00000% : 9527.724us 00:09:56.044 98.00000% : 11317.462us 00:09:56.044 99.00000% : 18213.218us 00:09:56.044 99.50000% : 24635.219us 00:09:56.044 99.90000% : 26530.236us 00:09:56.044 99.99000% : 26951.351us 00:09:56.044 99.99900% : 27161.908us 00:09:56.044 99.99990% : 27161.908us 00:09:56.044 99.99999% : 27161.908us 00:09:56.044 00:09:56.044 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:56.044 ================================================================================= 00:09:56.044 1.00000% : 5895.608us 00:09:56.044 10.00000% : 6369.362us 00:09:56.044 25.00000% : 7158.953us 00:09:56.044 50.00000% : 8211.740us 00:09:56.044 75.00000% : 8738.133us 00:09:56.044 90.00000% : 9264.527us 00:09:56.044 95.00000% : 9738.281us 00:09:56.044 98.00000% : 11159.544us 00:09:56.044 99.00000% : 18107.939us 00:09:56.044 99.50000% : 26951.351us 00:09:56.044 99.90000% : 29267.483us 00:09:56.044 99.99000% : 29688.598us 00:09:56.044 99.99900% : 29688.598us 00:09:56.044 99.99990% : 29688.598us 00:09:56.044 99.99999% : 29688.598us 00:09:56.044 00:09:56.044 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:56.044 ================================================================================= 00:09:56.044 1.00000% : 6000.887us 00:09:56.045 10.00000% : 6395.682us 00:09:56.045 25.00000% : 7158.953us 00:09:56.045 50.00000% : 8317.018us 00:09:56.045 75.00000% : 8738.133us 00:09:56.045 90.00000% : 9106.609us 00:09:56.045 95.00000% : 9422.445us 
00:09:56.045 98.00000% : 10527.871us 00:09:56.045 99.00000% : 16107.643us 00:09:56.045 99.50000% : 28846.368us 00:09:56.045 99.90000% : 30741.385us 00:09:56.045 99.99000% : 31162.500us 00:09:56.045 99.99900% : 31373.057us 00:09:56.045 99.99990% : 31373.057us 00:09:56.045 99.99999% : 31373.057us 00:09:56.045 00:09:56.045 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:56.045 ================================================================================= 00:09:56.045 1.00000% : 5895.608us 00:09:56.045 10.00000% : 6422.002us 00:09:56.045 25.00000% : 7053.674us 00:09:56.045 50.00000% : 8264.379us 00:09:56.045 75.00000% : 8738.133us 00:09:56.045 90.00000% : 9211.888us 00:09:56.045 95.00000% : 9685.642us 00:09:56.045 98.00000% : 10422.593us 00:09:56.045 99.00000% : 14423.184us 00:09:56.045 99.50000% : 31373.057us 00:09:56.045 99.90000% : 32004.729us 00:09:56.045 99.99000% : 32215.287us 00:09:56.045 99.99900% : 32215.287us 00:09:56.045 99.99990% : 32215.287us 00:09:56.045 99.99999% : 32215.287us 00:09:56.045 00:09:56.045 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:56.045 ================================================================================= 00:09:56.045 1.00000% : 5790.329us 00:09:56.045 10.00000% : 6395.682us 00:09:56.045 25.00000% : 7158.953us 00:09:56.045 50.00000% : 8317.018us 00:09:56.045 75.00000% : 8738.133us 00:09:56.045 90.00000% : 9211.888us 00:09:56.045 95.00000% : 9527.724us 00:09:56.045 98.00000% : 10106.757us 00:09:56.045 99.00000% : 12528.167us 00:09:56.045 99.50000% : 33057.516us 00:09:56.045 99.90000% : 34531.418us 00:09:56.045 99.99000% : 34952.533us 00:09:56.045 99.99900% : 34952.533us 00:09:56.045 99.99990% : 34952.533us 00:09:56.045 99.99999% : 34952.533us 00:09:56.045 00:09:56.045 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:56.045 ================================================================================= 00:09:56.045 1.00000% : 5869.288us 00:09:56.045 10.00000% : 6369.362us 00:09:56.045 25.00000% : 7106.313us 00:09:56.045 50.00000% : 8317.018us 00:09:56.045 75.00000% : 8738.133us 00:09:56.045 90.00000% : 9106.609us 00:09:56.045 95.00000% : 9475.084us 00:09:56.045 98.00000% : 10317.314us 00:09:56.045 99.00000% : 12107.052us 00:09:56.045 99.50000% : 33899.746us 00:09:56.045 99.90000% : 35584.206us 00:09:56.045 99.99000% : 36005.320us 00:09:56.045 99.99900% : 36005.320us 00:09:56.045 99.99990% : 36005.320us 00:09:56.045 99.99999% : 36005.320us 00:09:56.045 00:09:56.045 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:56.045 ============================================================================== 00:09:56.045 Range in us Cumulative IO count 00:09:56.045 5553.452 - 5579.772: 0.0064% ( 1) 00:09:56.045 5579.772 - 5606.092: 0.0128% ( 1) 00:09:56.045 5606.092 - 5632.411: 0.0192% ( 1) 00:09:56.045 5632.411 - 5658.731: 0.0320% ( 2) 00:09:56.045 5658.731 - 5685.051: 0.0384% ( 1) 00:09:56.045 5711.370 - 5737.690: 0.0512% ( 2) 00:09:56.045 5737.690 - 5764.010: 0.0576% ( 1) 00:09:56.045 5764.010 - 5790.329: 0.0768% ( 3) 00:09:56.045 5790.329 - 5816.649: 0.1089% ( 5) 00:09:56.045 5816.649 - 5842.969: 0.1729% ( 10) 00:09:56.045 5842.969 - 5869.288: 0.2433% ( 11) 00:09:56.045 5869.288 - 5895.608: 0.3266% ( 13) 00:09:56.045 5895.608 - 5921.928: 0.4226% ( 15) 00:09:56.045 5921.928 - 5948.247: 0.5251% ( 16) 00:09:56.045 5948.247 - 5974.567: 0.6660% ( 22) 00:09:56.045 5974.567 - 6000.887: 0.8581% ( 30) 00:09:56.045 6000.887 - 6027.206: 1.3640% ( 79) 00:09:56.045 6027.206 - 
6053.526: 1.6906% ( 51) 00:09:56.045 6053.526 - 6079.846: 1.9403% ( 39) 00:09:56.045 6079.846 - 6106.165: 2.3117% ( 58) 00:09:56.045 6106.165 - 6132.485: 2.6639% ( 55) 00:09:56.045 6132.485 - 6158.805: 3.2595% ( 93) 00:09:56.045 6158.805 - 6185.124: 4.0279% ( 120) 00:09:56.045 6185.124 - 6211.444: 4.5594% ( 83) 00:09:56.045 6211.444 - 6237.764: 5.4816% ( 144) 00:09:56.045 6237.764 - 6264.084: 6.2244% ( 116) 00:09:56.045 6264.084 - 6290.403: 6.7815% ( 87) 00:09:56.045 6290.403 - 6316.723: 7.3770% ( 93) 00:09:56.045 6316.723 - 6343.043: 8.2672% ( 139) 00:09:56.045 6343.043 - 6369.362: 9.0996% ( 130) 00:09:56.045 6369.362 - 6395.682: 9.8040% ( 110) 00:09:56.045 6395.682 - 6422.002: 10.8863% ( 169) 00:09:56.045 6422.002 - 6448.321: 11.8724% ( 154) 00:09:56.045 6448.321 - 6474.641: 12.9739% ( 172) 00:09:56.045 6474.641 - 6500.961: 14.0817% ( 173) 00:09:56.045 6500.961 - 6527.280: 14.8309% ( 117) 00:09:56.045 6527.280 - 6553.600: 15.4969% ( 104) 00:09:56.045 6553.600 - 6579.920: 16.6944% ( 187) 00:09:56.045 6579.920 - 6606.239: 17.6550% ( 150) 00:09:56.045 6606.239 - 6632.559: 18.3850% ( 114) 00:09:56.045 6632.559 - 6658.879: 19.1855% ( 125) 00:09:56.045 6658.879 - 6685.198: 19.7170% ( 83) 00:09:56.045 6685.198 - 6711.518: 20.2485% ( 83) 00:09:56.045 6711.518 - 6737.838: 21.0873% ( 131) 00:09:56.045 6737.838 - 6790.477: 21.7853% ( 109) 00:09:56.045 6790.477 - 6843.116: 22.6947% ( 142) 00:09:56.045 6843.116 - 6895.756: 23.2198% ( 82) 00:09:56.045 6895.756 - 6948.395: 23.7001% ( 75) 00:09:56.045 6948.395 - 7001.035: 23.9562% ( 40) 00:09:56.045 7001.035 - 7053.674: 24.2380% ( 44) 00:09:56.045 7053.674 - 7106.313: 24.4941% ( 40) 00:09:56.045 7106.313 - 7158.953: 24.7374% ( 38) 00:09:56.045 7158.953 - 7211.592: 25.1281% ( 61) 00:09:56.045 7211.592 - 7264.231: 25.5699% ( 69) 00:09:56.045 7264.231 - 7316.871: 26.0310% ( 72) 00:09:56.045 7316.871 - 7369.510: 26.5625% ( 83) 00:09:56.045 7369.510 - 7422.149: 27.2733% ( 111) 00:09:56.045 7422.149 - 7474.789: 28.0546% ( 122) 00:09:56.045 7474.789 - 7527.428: 29.0856% ( 161) 00:09:56.045 7527.428 - 7580.067: 29.8732% ( 123) 00:09:56.045 7580.067 - 7632.707: 30.7441% ( 136) 00:09:56.045 7632.707 - 7685.346: 31.5318% ( 123) 00:09:56.045 7685.346 - 7737.986: 32.3130% ( 122) 00:09:56.045 7737.986 - 7790.625: 33.2223% ( 142) 00:09:56.045 7790.625 - 7843.264: 34.1381% ( 143) 00:09:56.045 7843.264 - 7895.904: 35.2203% ( 169) 00:09:56.045 7895.904 - 7948.543: 36.4690% ( 195) 00:09:56.045 7948.543 - 8001.182: 38.0443% ( 246) 00:09:56.045 8001.182 - 8053.822: 39.8309% ( 279) 00:09:56.045 8053.822 - 8106.461: 41.7392% ( 298) 00:09:56.045 8106.461 - 8159.100: 43.7180% ( 309) 00:09:56.045 8159.100 - 8211.740: 45.9273% ( 345) 00:09:56.045 8211.740 - 8264.379: 48.6040% ( 418) 00:09:56.045 8264.379 - 8317.018: 51.4536% ( 445) 00:09:56.045 8317.018 - 8369.658: 54.7387% ( 513) 00:09:56.045 8369.658 - 8422.297: 57.7228% ( 466) 00:09:56.045 8422.297 - 8474.937: 60.6429% ( 456) 00:09:56.045 8474.937 - 8527.576: 63.6014% ( 462) 00:09:56.045 8527.576 - 8580.215: 66.9378% ( 521) 00:09:56.045 8580.215 - 8632.855: 69.8835% ( 460) 00:09:56.045 8632.855 - 8685.494: 72.6434% ( 431) 00:09:56.045 8685.494 - 8738.133: 75.5891% ( 460) 00:09:56.045 8738.133 - 8790.773: 78.3811% ( 436) 00:09:56.045 8790.773 - 8843.412: 80.8914% ( 392) 00:09:56.045 8843.412 - 8896.051: 83.2928% ( 375) 00:09:56.045 8896.051 - 8948.691: 85.0858% ( 280) 00:09:56.045 8948.691 - 9001.330: 87.0453% ( 306) 00:09:56.045 9001.330 - 9053.969: 88.6270% ( 247) 00:09:56.045 9053.969 - 9106.609: 89.9782% ( 211) 00:09:56.045 
9106.609 - 9159.248: 91.1949% ( 190) 00:09:56.045 9159.248 - 9211.888: 92.1747% ( 153) 00:09:56.045 9211.888 - 9264.527: 93.0648% ( 139) 00:09:56.045 9264.527 - 9317.166: 93.7756% ( 111) 00:09:56.045 9317.166 - 9369.806: 94.2687% ( 77) 00:09:56.045 9369.806 - 9422.445: 94.6913% ( 66) 00:09:56.045 9422.445 - 9475.084: 94.9923% ( 47) 00:09:56.045 9475.084 - 9527.724: 95.2485% ( 40) 00:09:56.045 9527.724 - 9580.363: 95.4726% ( 35) 00:09:56.045 9580.363 - 9633.002: 95.6647% ( 30) 00:09:56.045 9633.002 - 9685.642: 95.8120% ( 23) 00:09:56.045 9685.642 - 9738.281: 95.9593% ( 23) 00:09:56.045 9738.281 - 9790.920: 96.0617% ( 16) 00:09:56.045 9790.920 - 9843.560: 96.1450% ( 13) 00:09:56.045 9843.560 - 9896.199: 96.2218% ( 12) 00:09:56.045 9896.199 - 9948.839: 96.2987% ( 12) 00:09:56.045 9948.839 - 10001.478: 96.4203% ( 19) 00:09:56.045 10001.478 - 10054.117: 96.5228% ( 16) 00:09:56.045 10054.117 - 10106.757: 96.6060% ( 13) 00:09:56.045 10106.757 - 10159.396: 96.6765% ( 11) 00:09:56.045 10159.396 - 10212.035: 96.7725% ( 15) 00:09:56.045 10212.035 - 10264.675: 96.8302% ( 9) 00:09:56.045 10264.675 - 10317.314: 96.9134% ( 13) 00:09:56.045 10317.314 - 10369.953: 96.9711% ( 9) 00:09:56.045 10369.953 - 10422.593: 97.0159% ( 7) 00:09:56.045 10422.593 - 10475.232: 97.0607% ( 7) 00:09:56.045 10475.232 - 10527.871: 97.1055% ( 7) 00:09:56.045 10527.871 - 10580.511: 97.1504% ( 7) 00:09:56.045 10580.511 - 10633.150: 97.1888% ( 6) 00:09:56.045 10633.150 - 10685.790: 97.2272% ( 6) 00:09:56.045 10685.790 - 10738.429: 97.2656% ( 6) 00:09:56.045 10738.429 - 10791.068: 97.2976% ( 5) 00:09:56.045 10791.068 - 10843.708: 97.3105% ( 2) 00:09:56.045 10843.708 - 10896.347: 97.3297% ( 3) 00:09:56.045 10896.347 - 10948.986: 97.3425% ( 2) 00:09:56.045 10948.986 - 11001.626: 97.3745% ( 5) 00:09:56.045 11001.626 - 11054.265: 97.4129% ( 6) 00:09:56.045 11054.265 - 11106.904: 97.4898% ( 12) 00:09:56.045 11106.904 - 11159.544: 97.6434% ( 24) 00:09:56.045 11159.544 - 11212.183: 97.7651% ( 19) 00:09:56.045 11212.183 - 11264.822: 97.8356% ( 11) 00:09:56.045 11264.822 - 11317.462: 98.0020% ( 26) 00:09:56.045 11317.462 - 11370.101: 98.0597% ( 9) 00:09:56.045 11370.101 - 11422.741: 98.1685% ( 17) 00:09:56.045 11422.741 - 11475.380: 98.1814% ( 2) 00:09:56.045 11475.380 - 11528.019: 98.1942% ( 2) 00:09:56.045 11528.019 - 11580.659: 98.2006% ( 1) 00:09:56.045 11580.659 - 11633.298: 98.2134% ( 2) 00:09:56.045 11633.298 - 11685.937: 98.2198% ( 1) 00:09:56.045 11685.937 - 11738.577: 98.2326% ( 2) 00:09:56.045 11738.577 - 11791.216: 98.2390% ( 1) 00:09:56.045 11791.216 - 11843.855: 98.2518% ( 2) 00:09:56.045 11843.855 - 11896.495: 98.2582% ( 1) 00:09:56.045 11896.495 - 11949.134: 98.2646% ( 1) 00:09:56.045 11949.134 - 12001.773: 98.2774% ( 2) 00:09:56.045 12001.773 - 12054.413: 98.2902% ( 2) 00:09:56.045 12054.413 - 12107.052: 98.3030% ( 2) 00:09:56.045 12107.052 - 12159.692: 98.3094% ( 1) 00:09:56.045 12159.692 - 12212.331: 98.3222% ( 2) 00:09:56.045 12212.331 - 12264.970: 98.3286% ( 1) 00:09:56.045 12264.970 - 12317.610: 98.3414% ( 2) 00:09:56.045 12317.610 - 12370.249: 98.3478% ( 1) 00:09:56.045 12370.249 - 12422.888: 98.3607% ( 2) 00:09:56.045 15475.971 - 15581.250: 98.3671% ( 1) 00:09:56.045 16107.643 - 16212.922: 98.3735% ( 1) 00:09:56.045 16212.922 - 16318.201: 98.4055% ( 5) 00:09:56.045 16318.201 - 16423.480: 98.4439% ( 6) 00:09:56.045 16423.480 - 16528.758: 98.4951% ( 8) 00:09:56.045 16528.758 - 16634.037: 98.5272% ( 5) 00:09:56.045 16634.037 - 16739.316: 98.5592% ( 5) 00:09:56.045 16739.316 - 16844.594: 98.6040% ( 7) 00:09:56.045 
16844.594 - 16949.873: 98.6424% ( 6) 00:09:56.045 16949.873 - 17055.152: 98.7513% ( 17) 00:09:56.045 17055.152 - 17160.431: 98.8409% ( 14) 00:09:56.045 17160.431 - 17265.709: 98.8665% ( 4) 00:09:56.045 17265.709 - 17370.988: 98.8922% ( 4) 00:09:56.045 17370.988 - 17476.267: 98.9050% ( 2) 00:09:56.045 17476.267 - 17581.545: 98.9242% ( 3) 00:09:56.045 17581.545 - 17686.824: 98.9370% ( 2) 00:09:56.045 17686.824 - 17792.103: 98.9498% ( 2) 00:09:56.045 17792.103 - 17897.382: 98.9690% ( 3) 00:09:56.045 17897.382 - 18002.660: 98.9754% ( 1) 00:09:56.045 18002.660 - 18107.939: 98.9882% ( 2) 00:09:56.045 18107.939 - 18213.218: 99.0010% ( 2) 00:09:56.045 18213.218 - 18318.496: 99.0138% ( 2) 00:09:56.045 18318.496 - 18423.775: 99.0330% ( 3) 00:09:56.045 18423.775 - 18529.054: 99.0459% ( 2) 00:09:56.045 18529.054 - 18634.333: 99.0587% ( 2) 00:09:56.045 18634.333 - 18739.611: 99.0715% ( 2) 00:09:56.045 18739.611 - 18844.890: 99.0907% ( 3) 00:09:56.045 18844.890 - 18950.169: 99.1035% ( 2) 00:09:56.045 18950.169 - 19055.447: 99.1163% ( 2) 00:09:56.045 19055.447 - 19160.726: 99.1355% ( 3) 00:09:56.045 19160.726 - 19266.005: 99.1419% ( 1) 00:09:56.045 19266.005 - 19371.284: 99.1611% ( 3) 00:09:56.045 19371.284 - 19476.562: 99.1739% ( 2) 00:09:56.045 19476.562 - 19581.841: 99.1803% ( 1) 00:09:56.045 23056.039 - 23161.317: 99.1931% ( 2) 00:09:56.045 23161.317 - 23266.596: 99.2123% ( 3) 00:09:56.045 23266.596 - 23371.875: 99.2380% ( 4) 00:09:56.045 23371.875 - 23477.153: 99.2636% ( 4) 00:09:56.045 23477.153 - 23582.432: 99.2892% ( 4) 00:09:56.045 23582.432 - 23687.711: 99.3084% ( 3) 00:09:56.045 23687.711 - 23792.990: 99.3340% ( 4) 00:09:56.045 23792.990 - 23898.268: 99.3596% ( 4) 00:09:56.045 23898.268 - 24003.547: 99.3788% ( 3) 00:09:56.045 24003.547 - 24108.826: 99.3916% ( 2) 00:09:56.045 24108.826 - 24214.104: 99.4109% ( 3) 00:09:56.045 24214.104 - 24319.383: 99.4365% ( 4) 00:09:56.045 24319.383 - 24424.662: 99.4621% ( 4) 00:09:56.045 24424.662 - 24529.941: 99.4877% ( 4) 00:09:56.045 24529.941 - 24635.219: 99.5133% ( 4) 00:09:56.045 24635.219 - 24740.498: 99.5389% ( 4) 00:09:56.045 24740.498 - 24845.777: 99.5645% ( 4) 00:09:56.045 24845.777 - 24951.055: 99.5838% ( 3) 00:09:56.045 24951.055 - 25056.334: 99.6030% ( 3) 00:09:56.045 25056.334 - 25161.613: 99.6286% ( 4) 00:09:56.045 25161.613 - 25266.892: 99.6542% ( 4) 00:09:56.045 25266.892 - 25372.170: 99.6798% ( 4) 00:09:56.045 25372.170 - 25477.449: 99.6990% ( 3) 00:09:56.045 25477.449 - 25582.728: 99.7182% ( 3) 00:09:56.045 25582.728 - 25688.006: 99.7374% ( 3) 00:09:56.045 25688.006 - 25793.285: 99.7631% ( 4) 00:09:56.045 25793.285 - 25898.564: 99.7823% ( 3) 00:09:56.045 25898.564 - 26003.843: 99.8079% ( 4) 00:09:56.045 26003.843 - 26109.121: 99.8271% ( 3) 00:09:56.045 26109.121 - 26214.400: 99.8527% ( 4) 00:09:56.045 26214.400 - 26319.679: 99.8719% ( 3) 00:09:56.045 26319.679 - 26424.957: 99.8911% ( 3) 00:09:56.045 26424.957 - 26530.236: 99.9103% ( 3) 00:09:56.045 26530.236 - 26635.515: 99.9296% ( 3) 00:09:56.045 26635.515 - 26740.794: 99.9552% ( 4) 00:09:56.045 26740.794 - 26846.072: 99.9744% ( 3) 00:09:56.045 26846.072 - 26951.351: 99.9936% ( 3) 00:09:56.045 26951.351 - 27161.908: 100.0000% ( 1) 00:09:56.045 00:09:56.045 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:56.045 ============================================================================== 00:09:56.045 Range in us Cumulative IO count 00:09:56.045 5132.337 - 5158.657: 0.0064% ( 1) 00:09:56.045 5316.575 - 5342.895: 0.0128% ( 1) 00:09:56.045 5421.854 - 5448.173: 0.0320% ( 
3) 00:09:56.045 5448.173 - 5474.493: 0.0448% ( 2) 00:09:56.045 5500.813 - 5527.133: 0.0512% ( 1) 00:09:56.045 5527.133 - 5553.452: 0.0832% ( 5) 00:09:56.045 5553.452 - 5579.772: 0.1025% ( 3) 00:09:56.045 5579.772 - 5606.092: 0.1089% ( 1) 00:09:56.045 5606.092 - 5632.411: 0.1153% ( 1) 00:09:56.045 5632.411 - 5658.731: 0.1281% ( 2) 00:09:56.045 5658.731 - 5685.051: 0.1473% ( 3) 00:09:56.045 5685.051 - 5711.370: 0.1729% ( 4) 00:09:56.045 5711.370 - 5737.690: 0.2049% ( 5) 00:09:56.045 5737.690 - 5764.010: 0.3202% ( 18) 00:09:56.045 5764.010 - 5790.329: 0.3970% ( 12) 00:09:56.045 5790.329 - 5816.649: 0.5635% ( 26) 00:09:56.045 5816.649 - 5842.969: 0.6916% ( 20) 00:09:56.045 5842.969 - 5869.288: 0.8645% ( 27) 00:09:56.045 5869.288 - 5895.608: 1.0054% ( 22) 00:09:56.045 5895.608 - 5921.928: 1.2423% ( 37) 00:09:56.045 5921.928 - 5948.247: 1.5241% ( 44) 00:09:56.045 5948.247 - 5974.567: 1.9083% ( 60) 00:09:56.045 5974.567 - 6000.887: 2.2029% ( 46) 00:09:56.045 6000.887 - 6027.206: 2.4654% ( 41) 00:09:56.045 6027.206 - 6053.526: 2.7152% ( 39) 00:09:56.045 6053.526 - 6079.846: 3.0418% ( 51) 00:09:56.045 6079.846 - 6106.165: 3.5733% ( 83) 00:09:56.045 6106.165 - 6132.485: 4.0663% ( 77) 00:09:56.045 6132.485 - 6158.805: 4.6683% ( 94) 00:09:56.045 6158.805 - 6185.124: 5.5648% ( 140) 00:09:56.045 6185.124 - 6211.444: 6.2244% ( 103) 00:09:56.045 6211.444 - 6237.764: 6.9928% ( 120) 00:09:56.045 6237.764 - 6264.084: 7.8509% ( 134) 00:09:56.045 6264.084 - 6290.403: 8.6258% ( 121) 00:09:56.045 6290.403 - 6316.723: 9.2533% ( 98) 00:09:56.045 6316.723 - 6343.043: 9.8809% ( 98) 00:09:56.045 6343.043 - 6369.362: 10.7902% ( 142) 00:09:56.045 6369.362 - 6395.682: 11.5843% ( 124) 00:09:56.045 6395.682 - 6422.002: 12.3079% ( 113) 00:09:56.045 6422.002 - 6448.321: 13.0507% ( 116) 00:09:56.045 6448.321 - 6474.641: 13.7103% ( 103) 00:09:56.045 6474.641 - 6500.961: 14.3571% ( 101) 00:09:56.045 6500.961 - 6527.280: 15.1383% ( 122) 00:09:56.045 6527.280 - 6553.600: 15.8940% ( 118) 00:09:56.045 6553.600 - 6579.920: 16.5151% ( 97) 00:09:56.045 6579.920 - 6606.239: 17.0338% ( 81) 00:09:56.045 6606.239 - 6632.559: 17.7959% ( 119) 00:09:56.045 6632.559 - 6658.879: 18.4618% ( 104) 00:09:56.045 6658.879 - 6685.198: 19.1086% ( 101) 00:09:56.045 6685.198 - 6711.518: 19.7234% ( 96) 00:09:56.045 6711.518 - 6737.838: 20.2228% ( 78) 00:09:56.045 6737.838 - 6790.477: 20.9721% ( 117) 00:09:56.045 6790.477 - 6843.116: 21.5100% ( 84) 00:09:56.045 6843.116 - 6895.756: 22.1824% ( 105) 00:09:56.045 6895.756 - 6948.395: 22.9124% ( 114) 00:09:56.046 6948.395 - 7001.035: 23.5976% ( 107) 00:09:56.046 7001.035 - 7053.674: 24.2380% ( 100) 00:09:56.046 7053.674 - 7106.313: 24.8527% ( 96) 00:09:56.046 7106.313 - 7158.953: 25.2882% ( 68) 00:09:56.046 7158.953 - 7211.592: 25.9413% ( 102) 00:09:56.046 7211.592 - 7264.231: 26.4536% ( 80) 00:09:56.046 7264.231 - 7316.871: 26.9339% ( 75) 00:09:56.046 7316.871 - 7369.510: 27.4846% ( 86) 00:09:56.046 7369.510 - 7422.149: 27.9137% ( 67) 00:09:56.046 7422.149 - 7474.789: 28.4324% ( 81) 00:09:56.046 7474.789 - 7527.428: 29.3225% ( 139) 00:09:56.046 7527.428 - 7580.067: 30.1614% ( 131) 00:09:56.046 7580.067 - 7632.707: 31.2500% ( 170) 00:09:56.046 7632.707 - 7685.346: 32.1273% ( 137) 00:09:56.046 7685.346 - 7737.986: 33.2544% ( 176) 00:09:56.046 7737.986 - 7790.625: 34.3430% ( 170) 00:09:56.046 7790.625 - 7843.264: 35.3932% ( 164) 00:09:56.046 7843.264 - 7895.904: 37.1094% ( 268) 00:09:56.046 7895.904 - 7948.543: 39.2162% ( 329) 00:09:56.046 7948.543 - 8001.182: 41.1949% ( 309) 00:09:56.046 8001.182 - 8053.822: 
43.1160% ( 300) 00:09:56.046 8053.822 - 8106.461: 45.9209% ( 438) 00:09:56.046 8106.461 - 8159.100: 47.9124% ( 311) 00:09:56.046 8159.100 - 8211.740: 50.1473% ( 349) 00:09:56.046 8211.740 - 8264.379: 52.9777% ( 442) 00:09:56.046 8264.379 - 8317.018: 55.8530% ( 449) 00:09:56.046 8317.018 - 8369.658: 58.8307% ( 465) 00:09:56.046 8369.658 - 8422.297: 61.6035% ( 433) 00:09:56.046 8422.297 - 8474.937: 63.9216% ( 362) 00:09:56.046 8474.937 - 8527.576: 66.1181% ( 343) 00:09:56.046 8527.576 - 8580.215: 68.5643% ( 382) 00:09:56.046 8580.215 - 8632.855: 70.8376% ( 355) 00:09:56.046 8632.855 - 8685.494: 73.1685% ( 364) 00:09:56.046 8685.494 - 8738.133: 75.0704% ( 297) 00:09:56.046 8738.133 - 8790.773: 76.5369% ( 229) 00:09:56.046 8790.773 - 8843.412: 78.0866% ( 242) 00:09:56.046 8843.412 - 8896.051: 79.7131% ( 254) 00:09:56.046 8896.051 - 8948.691: 81.6919% ( 309) 00:09:56.046 8948.691 - 9001.330: 83.7026% ( 314) 00:09:56.046 9001.330 - 9053.969: 85.5213% ( 284) 00:09:56.046 9053.969 - 9106.609: 87.0069% ( 232) 00:09:56.046 9106.609 - 9159.248: 88.4734% ( 229) 00:09:56.046 9159.248 - 9211.888: 89.2098% ( 115) 00:09:56.046 9211.888 - 9264.527: 90.0038% ( 124) 00:09:56.046 9264.527 - 9317.166: 91.0605% ( 165) 00:09:56.046 9317.166 - 9369.806: 91.8929% ( 130) 00:09:56.046 9369.806 - 9422.445: 93.0840% ( 186) 00:09:56.046 9422.445 - 9475.084: 93.4426% ( 56) 00:09:56.046 9475.084 - 9527.724: 93.8525% ( 64) 00:09:56.046 9527.724 - 9580.363: 94.2303% ( 59) 00:09:56.046 9580.363 - 9633.002: 94.5056% ( 43) 00:09:56.046 9633.002 - 9685.642: 94.8130% ( 48) 00:09:56.046 9685.642 - 9738.281: 95.1140% ( 47) 00:09:56.046 9738.281 - 9790.920: 95.3701% ( 40) 00:09:56.046 9790.920 - 9843.560: 95.6071% ( 37) 00:09:56.046 9843.560 - 9896.199: 95.8312% ( 35) 00:09:56.046 9896.199 - 9948.839: 96.0361% ( 32) 00:09:56.046 9948.839 - 10001.478: 96.2090% ( 27) 00:09:56.046 10001.478 - 10054.117: 96.3179% ( 17) 00:09:56.046 10054.117 - 10106.757: 96.4203% ( 16) 00:09:56.046 10106.757 - 10159.396: 96.5100% ( 14) 00:09:56.046 10159.396 - 10212.035: 96.6060% ( 15) 00:09:56.046 10212.035 - 10264.675: 96.7213% ( 18) 00:09:56.046 10264.675 - 10317.314: 96.7982% ( 12) 00:09:56.046 10317.314 - 10369.953: 96.8878% ( 14) 00:09:56.046 10369.953 - 10422.593: 96.9582% ( 11) 00:09:56.046 10422.593 - 10475.232: 97.0479% ( 14) 00:09:56.046 10475.232 - 10527.871: 97.1824% ( 21) 00:09:56.046 10527.871 - 10580.511: 97.3233% ( 22) 00:09:56.046 10580.511 - 10633.150: 97.4257% ( 16) 00:09:56.046 10633.150 - 10685.790: 97.4962% ( 11) 00:09:56.046 10685.790 - 10738.429: 97.5794% ( 13) 00:09:56.046 10738.429 - 10791.068: 97.6434% ( 10) 00:09:56.046 10791.068 - 10843.708: 97.6883% ( 7) 00:09:56.046 10843.708 - 10896.347: 97.7651% ( 12) 00:09:56.046 10896.347 - 10948.986: 97.8291% ( 10) 00:09:56.046 10948.986 - 11001.626: 97.8996% ( 11) 00:09:56.046 11001.626 - 11054.265: 97.9508% ( 8) 00:09:56.046 11054.265 - 11106.904: 97.9956% ( 7) 00:09:56.046 11106.904 - 11159.544: 98.0469% ( 8) 00:09:56.046 11159.544 - 11212.183: 98.1173% ( 11) 00:09:56.046 11212.183 - 11264.822: 98.1429% ( 4) 00:09:56.046 11264.822 - 11317.462: 98.1493% ( 1) 00:09:56.046 11317.462 - 11370.101: 98.1621% ( 2) 00:09:56.046 11370.101 - 11422.741: 98.1685% ( 1) 00:09:56.046 11422.741 - 11475.380: 98.1814% ( 2) 00:09:56.046 11475.380 - 11528.019: 98.1878% ( 1) 00:09:56.046 11580.659 - 11633.298: 98.2006% ( 2) 00:09:56.046 11633.298 - 11685.937: 98.2070% ( 1) 00:09:56.046 11685.937 - 11738.577: 98.2262% ( 3) 00:09:56.046 11791.216 - 11843.855: 98.2326% ( 1) 00:09:56.046 11843.855 - 
11896.495: 98.2454% ( 2) 00:09:56.046 11896.495 - 11949.134: 98.2582% ( 2) 00:09:56.046 11949.134 - 12001.773: 98.2710% ( 2) 00:09:56.046 12001.773 - 12054.413: 98.2774% ( 1) 00:09:56.046 12054.413 - 12107.052: 98.2838% ( 1) 00:09:56.046 12107.052 - 12159.692: 98.2966% ( 2) 00:09:56.046 12159.692 - 12212.331: 98.3222% ( 4) 00:09:56.046 12212.331 - 12264.970: 98.3350% ( 2) 00:09:56.046 12264.970 - 12317.610: 98.3607% ( 4) 00:09:56.046 14317.905 - 14423.184: 98.3671% ( 1) 00:09:56.046 14423.184 - 14528.463: 98.3927% ( 4) 00:09:56.046 14528.463 - 14633.741: 98.4055% ( 2) 00:09:56.046 14633.741 - 14739.020: 98.4247% ( 3) 00:09:56.046 14739.020 - 14844.299: 98.4503% ( 4) 00:09:56.046 14844.299 - 14949.578: 98.4759% ( 4) 00:09:56.046 14949.578 - 15054.856: 98.4951% ( 3) 00:09:56.046 15054.856 - 15160.135: 98.5400% ( 7) 00:09:56.046 15160.135 - 15265.414: 98.5720% ( 5) 00:09:56.046 15265.414 - 15370.692: 98.6168% ( 7) 00:09:56.046 15370.692 - 15475.971: 98.6744% ( 9) 00:09:56.046 15475.971 - 15581.250: 98.6936% ( 3) 00:09:56.046 15581.250 - 15686.529: 98.7193% ( 4) 00:09:56.046 15686.529 - 15791.807: 98.7321% ( 2) 00:09:56.046 15791.807 - 15897.086: 98.7513% ( 3) 00:09:56.046 15897.086 - 16002.365: 98.7769% ( 4) 00:09:56.046 16002.365 - 16107.643: 98.7961% ( 3) 00:09:56.046 16107.643 - 16212.922: 98.8153% ( 3) 00:09:56.046 16212.922 - 16318.201: 98.8281% ( 2) 00:09:56.046 16318.201 - 16423.480: 98.8473% ( 3) 00:09:56.046 16423.480 - 16528.758: 98.8730% ( 4) 00:09:56.046 16528.758 - 16634.037: 98.8858% ( 2) 00:09:56.046 16634.037 - 16739.316: 98.9114% ( 4) 00:09:56.046 16739.316 - 16844.594: 98.9178% ( 1) 00:09:56.046 16844.594 - 16949.873: 98.9370% ( 3) 00:09:56.046 16949.873 - 17055.152: 98.9434% ( 1) 00:09:56.046 17055.152 - 17160.431: 98.9562% ( 2) 00:09:56.046 17265.709 - 17370.988: 98.9626% ( 1) 00:09:56.046 17370.988 - 17476.267: 98.9690% ( 1) 00:09:56.046 17897.382 - 18002.660: 98.9754% ( 1) 00:09:56.046 18002.660 - 18107.939: 99.0394% ( 10) 00:09:56.046 18107.939 - 18213.218: 99.1483% ( 17) 00:09:56.046 18213.218 - 18318.496: 99.1803% ( 5) 00:09:56.046 25161.613 - 25266.892: 99.2059% ( 4) 00:09:56.046 25266.892 - 25372.170: 99.2188% ( 2) 00:09:56.046 25372.170 - 25477.449: 99.2444% ( 4) 00:09:56.046 25477.449 - 25582.728: 99.2636% ( 3) 00:09:56.046 25582.728 - 25688.006: 99.2764% ( 2) 00:09:56.046 25688.006 - 25793.285: 99.2956% ( 3) 00:09:56.046 25793.285 - 25898.564: 99.3148% ( 3) 00:09:56.046 25898.564 - 26003.843: 99.3404% ( 4) 00:09:56.046 26003.843 - 26109.121: 99.3596% ( 3) 00:09:56.046 26109.121 - 26214.400: 99.3788% ( 3) 00:09:56.046 26214.400 - 26319.679: 99.3981% ( 3) 00:09:56.046 26319.679 - 26424.957: 99.4173% ( 3) 00:09:56.046 26424.957 - 26530.236: 99.4365% ( 3) 00:09:56.046 26530.236 - 26635.515: 99.4557% ( 3) 00:09:56.046 26635.515 - 26740.794: 99.4749% ( 3) 00:09:56.046 26740.794 - 26846.072: 99.4941% ( 3) 00:09:56.046 26846.072 - 26951.351: 99.5069% ( 2) 00:09:56.046 26951.351 - 27161.908: 99.5453% ( 6) 00:09:56.046 27161.908 - 27372.466: 99.5838% ( 6) 00:09:56.046 27372.466 - 27583.023: 99.6222% ( 6) 00:09:56.046 27583.023 - 27793.581: 99.6542% ( 5) 00:09:56.046 27793.581 - 28004.138: 99.6926% ( 6) 00:09:56.046 28004.138 - 28214.696: 99.7246% ( 5) 00:09:56.046 28214.696 - 28425.253: 99.7631% ( 6) 00:09:56.046 28425.253 - 28635.810: 99.8079% ( 7) 00:09:56.046 28635.810 - 28846.368: 99.8399% ( 5) 00:09:56.046 28846.368 - 29056.925: 99.8911% ( 8) 00:09:56.046 29056.925 - 29267.483: 99.9232% ( 5) 00:09:56.046 29267.483 - 29478.040: 99.9744% ( 8) 00:09:56.046 29478.040 - 
29688.598: 100.0000% ( 4) 00:09:56.046 00:09:56.046 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:56.046 ============================================================================== 00:09:56.046 Range in us Cumulative IO count 00:09:56.046 5579.772 - 5606.092: 0.0832% ( 13) 00:09:56.046 5606.092 - 5632.411: 0.1217% ( 6) 00:09:56.046 5632.411 - 5658.731: 0.1345% ( 2) 00:09:56.046 5658.731 - 5685.051: 0.1409% ( 1) 00:09:56.046 5685.051 - 5711.370: 0.1537% ( 2) 00:09:56.046 5711.370 - 5737.690: 0.1601% ( 1) 00:09:56.046 5737.690 - 5764.010: 0.1729% ( 2) 00:09:56.046 5764.010 - 5790.329: 0.1793% ( 1) 00:09:56.046 5790.329 - 5816.649: 0.1921% ( 2) 00:09:56.046 5816.649 - 5842.969: 0.2305% ( 6) 00:09:56.046 5842.969 - 5869.288: 0.2946% ( 10) 00:09:56.046 5869.288 - 5895.608: 0.3650% ( 11) 00:09:56.046 5895.608 - 5921.928: 0.4419% ( 12) 00:09:56.046 5921.928 - 5948.247: 0.5251% ( 13) 00:09:56.046 5948.247 - 5974.567: 0.7812% ( 40) 00:09:56.046 5974.567 - 6000.887: 1.0438% ( 41) 00:09:56.046 6000.887 - 6027.206: 1.1783% ( 21) 00:09:56.046 6027.206 - 6053.526: 1.3640% ( 29) 00:09:56.046 6053.526 - 6079.846: 1.6778% ( 49) 00:09:56.046 6079.846 - 6106.165: 2.4590% ( 122) 00:09:56.046 6106.165 - 6132.485: 2.8048% ( 54) 00:09:56.046 6132.485 - 6158.805: 3.1122% ( 48) 00:09:56.046 6158.805 - 6185.124: 3.4836% ( 58) 00:09:56.046 6185.124 - 6211.444: 3.9383% ( 71) 00:09:56.046 6211.444 - 6237.764: 4.7323% ( 124) 00:09:56.046 6237.764 - 6264.084: 5.4623% ( 114) 00:09:56.046 6264.084 - 6290.403: 6.5574% ( 171) 00:09:56.046 6290.403 - 6316.723: 7.5756% ( 159) 00:09:56.046 6316.723 - 6343.043: 8.4593% ( 138) 00:09:56.046 6343.043 - 6369.362: 9.7080% ( 195) 00:09:56.046 6369.362 - 6395.682: 11.3281% ( 253) 00:09:56.046 6395.682 - 6422.002: 13.0763% ( 273) 00:09:56.046 6422.002 - 6448.321: 14.2802% ( 188) 00:09:56.046 6448.321 - 6474.641: 15.3689% ( 170) 00:09:56.046 6474.641 - 6500.961: 16.2654% ( 140) 00:09:56.046 6500.961 - 6527.280: 17.3220% ( 165) 00:09:56.046 6527.280 - 6553.600: 18.2953% ( 152) 00:09:56.046 6553.600 - 6579.920: 18.8653% ( 89) 00:09:56.046 6579.920 - 6606.239: 19.3455% ( 75) 00:09:56.046 6606.239 - 6632.559: 19.8322% ( 76) 00:09:56.046 6632.559 - 6658.879: 20.2933% ( 72) 00:09:56.046 6658.879 - 6685.198: 20.6199% ( 51) 00:09:56.046 6685.198 - 6711.518: 21.2731% ( 102) 00:09:56.046 6711.518 - 6737.838: 21.8302% ( 87) 00:09:56.046 6737.838 - 6790.477: 22.5410% ( 111) 00:09:56.046 6790.477 - 6843.116: 23.3735% ( 130) 00:09:56.046 6843.116 - 6895.756: 23.7705% ( 62) 00:09:56.046 6895.756 - 6948.395: 24.1547% ( 60) 00:09:56.046 6948.395 - 7001.035: 24.4685% ( 49) 00:09:56.046 7001.035 - 7053.674: 24.6798% ( 33) 00:09:56.046 7053.674 - 7106.313: 24.9103% ( 36) 00:09:56.046 7106.313 - 7158.953: 25.2561% ( 54) 00:09:56.046 7158.953 - 7211.592: 25.3970% ( 22) 00:09:56.046 7211.592 - 7264.231: 25.6148% ( 34) 00:09:56.046 7264.231 - 7316.871: 25.7812% ( 26) 00:09:56.046 7316.871 - 7369.510: 25.9862% ( 32) 00:09:56.046 7369.510 - 7422.149: 26.1847% ( 31) 00:09:56.046 7422.149 - 7474.789: 26.7354% ( 86) 00:09:56.046 7474.789 - 7527.428: 27.4782% ( 116) 00:09:56.046 7527.428 - 7580.067: 28.0546% ( 90) 00:09:56.046 7580.067 - 7632.707: 28.7654% ( 111) 00:09:56.046 7632.707 - 7685.346: 29.6747% ( 142) 00:09:56.046 7685.346 - 7737.986: 30.6801% ( 157) 00:09:56.046 7737.986 - 7790.625: 31.1796% ( 78) 00:09:56.046 7790.625 - 7843.264: 32.1209% ( 147) 00:09:56.046 7843.264 - 7895.904: 33.4209% ( 203) 00:09:56.046 7895.904 - 7948.543: 34.7272% ( 204) 00:09:56.046 7948.543 - 8001.182: 
36.2065% ( 231) 00:09:56.046 8001.182 - 8053.822: 38.1148% ( 298) 00:09:56.046 8053.822 - 8106.461: 40.4969% ( 372) 00:09:56.046 8106.461 - 8159.100: 42.5077% ( 314) 00:09:56.046 8159.100 - 8211.740: 45.6519% ( 491) 00:09:56.046 8211.740 - 8264.379: 48.1942% ( 397) 00:09:56.046 8264.379 - 8317.018: 51.4921% ( 515) 00:09:56.046 8317.018 - 8369.658: 55.1550% ( 572) 00:09:56.046 8369.658 - 8422.297: 60.0986% ( 772) 00:09:56.046 8422.297 - 8474.937: 63.5566% ( 540) 00:09:56.046 8474.937 - 8527.576: 66.4831% ( 457) 00:09:56.046 8527.576 - 8580.215: 69.3776% ( 452) 00:09:56.046 8580.215 - 8632.855: 71.9326% ( 399) 00:09:56.046 8632.855 - 8685.494: 74.4941% ( 400) 00:09:56.046 8685.494 - 8738.133: 77.5935% ( 484) 00:09:56.046 8738.133 - 8790.773: 80.5328% ( 459) 00:09:56.046 8790.773 - 8843.412: 82.8509% ( 362) 00:09:56.046 8843.412 - 8896.051: 85.0794% ( 348) 00:09:56.046 8896.051 - 8948.691: 86.6099% ( 239) 00:09:56.046 8948.691 - 9001.330: 88.0827% ( 230) 00:09:56.046 9001.330 - 9053.969: 89.2610% ( 184) 00:09:56.046 9053.969 - 9106.609: 90.3817% ( 175) 00:09:56.046 9106.609 - 9159.248: 91.4447% ( 166) 00:09:56.046 9159.248 - 9211.888: 92.4949% ( 164) 00:09:56.046 9211.888 - 9264.527: 93.3338% ( 131) 00:09:56.046 9264.527 - 9317.166: 94.0318% ( 109) 00:09:56.046 9317.166 - 9369.806: 94.5953% ( 88) 00:09:56.046 9369.806 - 9422.445: 95.1012% ( 79) 00:09:56.046 9422.445 - 9475.084: 95.5366% ( 68) 00:09:56.046 9475.084 - 9527.724: 95.9080% ( 58) 00:09:56.046 9527.724 - 9580.363: 96.3051% ( 62) 00:09:56.046 9580.363 - 9633.002: 96.5612% ( 40) 00:09:56.046 9633.002 - 9685.642: 96.7533% ( 30) 00:09:56.046 9685.642 - 9738.281: 96.8814% ( 20) 00:09:56.046 9738.281 - 9790.920: 96.9903% ( 17) 00:09:56.046 9790.920 - 9843.560: 97.0607% ( 11) 00:09:56.046 9843.560 - 9896.199: 97.1376% ( 12) 00:09:56.046 9896.199 - 9948.839: 97.2080% ( 11) 00:09:56.046 9948.839 - 10001.478: 97.2528% ( 7) 00:09:56.046 10001.478 - 10054.117: 97.2912% ( 6) 00:09:56.046 10054.117 - 10106.757: 97.3681% ( 12) 00:09:56.046 10106.757 - 10159.396: 97.5282% ( 25) 00:09:56.046 10159.396 - 10212.035: 97.7011% ( 27) 00:09:56.046 10212.035 - 10264.675: 97.7715% ( 11) 00:09:56.046 10264.675 - 10317.314: 97.8356% ( 10) 00:09:56.046 10317.314 - 10369.953: 97.9188% ( 13) 00:09:56.046 10369.953 - 10422.593: 97.9572% ( 6) 00:09:56.046 10422.593 - 10475.232: 97.9892% ( 5) 00:09:56.046 10475.232 - 10527.871: 98.0213% ( 5) 00:09:56.046 10527.871 - 10580.511: 98.1045% ( 13) 00:09:56.046 10580.511 - 10633.150: 98.1749% ( 11) 00:09:56.046 10633.150 - 10685.790: 98.2454% ( 11) 00:09:56.046 10685.790 - 10738.429: 98.2838% ( 6) 00:09:56.046 10738.429 - 10791.068: 98.3607% ( 12) 00:09:56.046 14633.741 - 14739.020: 98.3863% ( 4) 00:09:56.046 14739.020 - 14844.299: 98.4183% ( 5) 00:09:56.046 14844.299 - 14949.578: 98.4695% ( 8) 00:09:56.046 14949.578 - 15054.856: 98.5464% ( 12) 00:09:56.046 15054.856 - 15160.135: 98.6616% ( 18) 00:09:56.046 15160.135 - 15265.414: 98.7641% ( 16) 00:09:56.046 15265.414 - 15370.692: 98.8986% ( 21) 00:09:56.046 15370.692 - 15475.971: 98.9178% ( 3) 00:09:56.046 15475.971 - 15581.250: 98.9306% ( 2) 00:09:56.046 15581.250 - 15686.529: 98.9434% ( 2) 00:09:56.046 15686.529 - 15791.807: 98.9562% ( 2) 00:09:56.046 15791.807 - 15897.086: 98.9690% ( 2) 00:09:56.046 15897.086 - 16002.365: 98.9818% ( 2) 00:09:56.046 16002.365 - 16107.643: 99.0010% ( 3) 00:09:56.046 16107.643 - 16212.922: 99.0138% ( 2) 00:09:56.046 16212.922 - 16318.201: 99.0266% ( 2) 00:09:56.046 16318.201 - 16423.480: 99.0394% ( 2) 00:09:56.046 16423.480 - 16528.758: 
99.0523% ( 2) 00:09:56.046 16528.758 - 16634.037: 99.0715% ( 3) 00:09:56.046 16634.037 - 16739.316: 99.0843% ( 2) 00:09:56.046 16739.316 - 16844.594: 99.0971% ( 2) 00:09:56.046 16844.594 - 16949.873: 99.1163% ( 3) 00:09:56.046 16949.873 - 17055.152: 99.1355% ( 3) 00:09:56.046 17055.152 - 17160.431: 99.1483% ( 2) 00:09:56.046 17160.431 - 17265.709: 99.1675% ( 3) 00:09:56.046 17265.709 - 17370.988: 99.1803% ( 2) 00:09:56.046 27583.023 - 27793.581: 99.1931% ( 2) 00:09:56.046 27793.581 - 28004.138: 99.2636% ( 11) 00:09:56.046 28004.138 - 28214.696: 99.3212% ( 9) 00:09:56.046 28214.696 - 28425.253: 99.3852% ( 10) 00:09:56.046 28425.253 - 28635.810: 99.4429% ( 9) 00:09:56.046 28635.810 - 28846.368: 99.5069% ( 10) 00:09:56.046 28846.368 - 29056.925: 99.5645% ( 9) 00:09:56.046 29056.925 - 29267.483: 99.6222% ( 9) 00:09:56.046 29267.483 - 29478.040: 99.6926% ( 11) 00:09:56.046 29478.040 - 29688.598: 99.7503% ( 9) 00:09:56.046 29688.598 - 29899.155: 99.8207% ( 11) 00:09:56.046 29899.155 - 30109.712: 99.8463% ( 4) 00:09:56.046 30109.712 - 30320.270: 99.8719% ( 4) 00:09:56.046 30320.270 - 30530.827: 99.8975% ( 4) 00:09:56.046 30530.827 - 30741.385: 99.9232% ( 4) 00:09:56.046 30741.385 - 30951.942: 99.9488% ( 4) 00:09:56.046 30951.942 - 31162.500: 99.9936% ( 7) 00:09:56.046 31162.500 - 31373.057: 100.0000% ( 1) 00:09:56.046 00:09:56.046 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:56.046 ============================================================================== 00:09:56.046 Range in us Cumulative IO count 00:09:56.046 5369.214 - 5395.534: 0.0064% ( 1) 00:09:56.046 5474.493 - 5500.813: 0.0128% ( 1) 00:09:56.046 5500.813 - 5527.133: 0.0256% ( 2) 00:09:56.046 5527.133 - 5553.452: 0.0448% ( 3) 00:09:56.046 5553.452 - 5579.772: 0.0576% ( 2) 00:09:56.046 5579.772 - 5606.092: 0.0704% ( 2) 00:09:56.047 5606.092 - 5632.411: 0.0832% ( 2) 00:09:56.047 5632.411 - 5658.731: 0.1601% ( 12) 00:09:56.047 5658.731 - 5685.051: 0.2177% ( 9) 00:09:56.047 5685.051 - 5711.370: 0.2818% ( 10) 00:09:56.047 5711.370 - 5737.690: 0.4226% ( 22) 00:09:56.047 5737.690 - 5764.010: 0.6019% ( 28) 00:09:56.047 5764.010 - 5790.329: 0.7300% ( 20) 00:09:56.047 5790.329 - 5816.649: 0.8517% ( 19) 00:09:56.047 5816.649 - 5842.969: 0.9349% ( 13) 00:09:56.047 5842.969 - 5869.288: 0.9862% ( 8) 00:09:56.047 5869.288 - 5895.608: 1.0310% ( 7) 00:09:56.047 5895.608 - 5921.928: 1.0694% ( 6) 00:09:56.047 5921.928 - 5948.247: 1.1719% ( 16) 00:09:56.047 5948.247 - 5974.567: 1.2615% ( 14) 00:09:56.047 5974.567 - 6000.887: 1.6650% ( 63) 00:09:56.047 6000.887 - 6027.206: 2.0364% ( 58) 00:09:56.047 6027.206 - 6053.526: 2.4078% ( 58) 00:09:56.047 6053.526 - 6079.846: 2.5102% ( 16) 00:09:56.047 6079.846 - 6106.165: 2.6127% ( 16) 00:09:56.047 6106.165 - 6132.485: 2.7728% ( 25) 00:09:56.047 6132.485 - 6158.805: 3.0161% ( 38) 00:09:56.047 6158.805 - 6185.124: 3.3299% ( 49) 00:09:56.047 6185.124 - 6211.444: 3.7077% ( 59) 00:09:56.047 6211.444 - 6237.764: 4.7579% ( 164) 00:09:56.047 6237.764 - 6264.084: 5.8210% ( 166) 00:09:56.047 6264.084 - 6290.403: 7.1337% ( 205) 00:09:56.047 6290.403 - 6316.723: 7.5179% ( 60) 00:09:56.047 6316.723 - 6343.043: 7.9982% ( 75) 00:09:56.047 6343.043 - 6369.362: 9.0100% ( 158) 00:09:56.047 6369.362 - 6395.682: 9.4711% ( 72) 00:09:56.047 6395.682 - 6422.002: 10.5020% ( 161) 00:09:56.047 6422.002 - 6448.321: 11.6739% ( 183) 00:09:56.047 6448.321 - 6474.641: 12.6089% ( 146) 00:09:56.047 6474.641 - 6500.961: 13.7359% ( 176) 00:09:56.047 6500.961 - 6527.280: 16.0028% ( 354) 00:09:56.047 6527.280 - 6553.600: 
16.8673% ( 135)
00:09:56.047 6553.600 - 6579.920: 18.3081% ( 225)
00:09:56.047 6579.920 - 6606.239: 19.9283% ( 253)
00:09:56.047 [... intermediate latency buckets from 6606.239 us to 31794.172 us elided; cumulative IO count climbs from 20.6967% to 99.8591% ...]
00:09:56.047 31794.172 - 32004.729: 99.9616% ( 16)
00:09:56.047 32004.729 - 32215.287: 100.0000% ( 6)
00:09:56.047
00:09:56.047 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0:
00:09:56.047 ==============================================================================
00:09:56.047 Range in us Cumulative IO count
00:09:56.047 4737.542 - 4763.862: 0.0064% ( 1)
00:09:56.047 [... intermediate latency buckets from 4763.862 us to 34741.976 us elided ...]
00:09:56.048 34741.976 - 34952.533: 100.0000% ( 5)
00:09:56.048
00:09:56.048 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0:
00:09:56.048 ==============================================================================
00:09:56.048 Range in us Cumulative IO count
00:09:56.048 4132.190 - 4158.509: 0.0256% ( 4)
00:09:56.048 [... intermediate latency buckets from 4158.509 us to 33899.746 us elided ...]
00:09:56.048 33899.746 - 34110.304: 99.5774% ( 11)
00:09:56.048 34110.304 - 34320.861: 99.6542% ( 12)
00:09:56.048 34320.861 - 34531.418: 99.7439% ( 14)
00:09:56.048 34531.418 - 34741.976: 99.7759% ( 5)
00:09:56.048 34741.976 - 34952.533: 99.8207% ( 7)
00:09:56.048 34952.533 - 35163.091: 99.8591% ( 6)
00:09:56.048 35163.091 - 35373.648: 99.8975% ( 6)
00:09:56.048 35373.648 - 35584.206: 99.9360% ( 6)
00:09:56.048 35584.206 - 35794.763: 99.9808% ( 7)
00:09:56.048 35794.763 - 36005.320: 100.0000% ( 3)
00:09:56.048
00:09:56.048 17:56:12 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:09:56.048
00:09:56.048 real 0m2.570s
00:09:56.048 user 0m2.227s
00:09:56.048 sys 0m0.236s
00:09:56.048 ************************************
00:09:56.048 END TEST nvme_perf
00:09:56.048 ************************************
00:09:56.048 17:56:12 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:56.048 17:56:12 -- common/autotest_common.sh@10 -- # set +x
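The nvme_perf histograms above are built from per-I/O timestamps. As a rough sketch of that idea using SPDK's public C API (not the tool's actual source), the snippet below times one read with spdk_get_ticks() and converts the TSC delta to microseconds, the unit the histograms use; the ns, qpair and DMA buffer are assumed to come from the usual probe/attach setup.

    #include <stdio.h>
    #include <stdbool.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    struct lat_sample {
        uint64_t submit_tsc;
        bool     done;
    };

    /* Completion callback: turn the TSC delta into microseconds. */
    static void
    read_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        struct lat_sample *s = arg;
        uint64_t us = (spdk_get_ticks() - s->submit_tsc) * 1000000ULL / spdk_get_ticks_hz();

        printf("%s in %llu us\n",
               spdk_nvme_cpl_is_error(cpl) ? "failed" : "completed",
               (unsigned long long)us);
        s->done = true;
    }

    /* Time a single one-block read; buf must be DMA-safe (spdk_zmalloc). */
    void
    time_one_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, void *buf)
    {
        struct lat_sample s = { .submit_tsc = spdk_get_ticks(), .done = false };

        if (spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* LBA */, 1, read_done, &s, 0) != 0) {
            fprintf(stderr, "submission failed\n");
            return;
        }
        while (!s.done) {
            spdk_nvme_qpair_process_completions(qpair, 0);
        }
    }

A tool like nvme_perf would run many such samples per queue and sort the deltas into the bucket table printed above.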
00:09:56.048 17:56:12 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:56.048 17:56:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:09:56.048 17:56:12 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:56.048 17:56:12 -- common/autotest_common.sh@10 -- # set +x
00:09:56.048 ************************************
00:09:56.048 START TEST nvme_hello_world
00:09:56.048 ************************************
00:09:56.048 17:56:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:56.307 Initializing NVMe Controllers
00:09:56.307 Attached to 0000:00:09.0
00:09:56.307 Namespace ID: 1 size: 1GB
00:09:56.307 Attached to 0000:00:06.0
00:09:56.307 Namespace ID: 1 size: 6GB
00:09:56.307 Attached to 0000:00:07.0
00:09:56.307 Namespace ID: 1 size: 5GB
00:09:56.307 Attached to 0000:00:08.0
00:09:56.307 Namespace ID: 1 size: 4GB
00:09:56.307 Namespace ID: 2 size: 4GB
00:09:56.307 Namespace ID: 3 size: 4GB
00:09:56.307 Initialization complete.
00:09:56.307 INFO: using host memory buffer for IO
00:09:56.307 Hello world!
00:09:56.307 INFO: using host memory buffer for IO
00:09:56.307 Hello world!
00:09:56.307 INFO: using host memory buffer for IO
00:09:56.307 Hello world!
00:09:56.307 INFO: using host memory buffer for IO
00:09:56.307 Hello world!
00:09:56.307 INFO: using host memory buffer for IO
00:09:56.307 Hello world!
00:09:56.307 INFO: using host memory buffer for IO
00:09:56.307 Hello world!
00:09:56.307
00:09:56.307 real 0m0.256s
00:09:56.307 user 0m0.086s
00:09:56.307 sys 0m0.114s
00:09:56.307 17:56:13 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:56.307 17:56:13 -- common/autotest_common.sh@10 -- # set +x
00:09:56.307 ************************************
00:09:56.307 END TEST nvme_hello_world
00:09:56.307 ************************************
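hello_world, run above, is the canonical minimal SPDK flow: probe the local PCIe controllers, attach, write a buffer through an I/O queue pair and read it back. A condensed sketch of that flow, assuming only SPDK's public API (error handling trimmed; the full example ships in the SPDK tree under examples/nvme/hello_world):

    #include <stdio.h>
    #include <string.h>
    #include <stdbool.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static struct spdk_nvme_ctrlr *g_ctrlr;

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        return true; /* attach to every NVMe controller found */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
        g_ctrlr = ctrlr; /* keep only the last controller, for brevity */
    }

    static void
    io_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        *(bool *)arg = true;
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;
        bool done = false;

        spdk_env_opts_init(&opts);
        opts.name = "hello_world_sketch";
        if (spdk_env_init(&opts) < 0 ||
            spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0 ||
            g_ctrlr == NULL) {
            return 1;
        }

        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(g_ctrlr, 1); /* NSID 1 */
        struct spdk_nvme_qpair *qpair = spdk_nvme_ctrlr_alloc_io_qpair(g_ctrlr, NULL, 0);
        char *buf = spdk_zmalloc(0x1000, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

        snprintf(buf, 0x1000, "Hello world!\n");
        spdk_nvme_ns_cmd_write(ns, qpair, buf, 0, 1, io_done, &done, 0); /* LBA 0, 1 block */
        while (!done) { spdk_nvme_qpair_process_completions(qpair, 0); }

        memset(buf, 0, 0x1000);
        done = false;
        spdk_nvme_ns_cmd_read(ns, qpair, buf, 0, 1, io_done, &done, 0);
        while (!done) { spdk_nvme_qpair_process_completions(qpair, 0); }
        printf("%s", buf); /* prints "Hello world!" */

        spdk_free(buf);
        spdk_nvme_ctrlr_free_io_qpair(qpair);
        spdk_nvme_detach(g_ctrlr);
        return 0;
    }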
00:09:56.307 17:56:13 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:56.307 17:56:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:56.307 17:56:13 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:56.307 17:56:13 -- common/autotest_common.sh@10 -- # set +x
00:09:56.307 ************************************
00:09:56.307 START TEST nvme_sgl
00:09:56.307 ************************************
00:09:56.307 17:56:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:56.565 0000:00:09.0: build_io_request_0 Invalid IO length parameter
00:09:56.565 0000:00:09.0: build_io_request_1 Invalid IO length parameter
00:09:56.565 0000:00:09.0: build_io_request_2 Invalid IO length parameter
00:09:56.565 0000:00:09.0: build_io_request_3 Invalid IO length parameter
00:09:56.565 0000:00:09.0: build_io_request_4 Invalid IO length parameter
00:09:56.566 0000:00:09.0: build_io_request_5 Invalid IO length parameter
00:09:56.566 0000:00:09.0: build_io_request_6 Invalid IO length parameter
00:09:56.566 0000:00:09.0: build_io_request_7 Invalid IO length parameter
00:09:56.566 0000:00:09.0: build_io_request_8 Invalid IO length parameter
00:09:56.566 0000:00:09.0: build_io_request_9 Invalid IO length parameter
00:09:56.566 0000:00:09.0: build_io_request_10 Invalid IO length parameter
00:09:56.566 0000:00:09.0: build_io_request_11 Invalid IO length parameter
00:09:56.566 0000:00:06.0: build_io_request_0 Invalid IO length parameter
00:09:56.566 0000:00:06.0: build_io_request_1 Invalid IO length parameter
00:09:56.566 0000:00:06.0: build_io_request_3 Invalid IO length parameter
00:09:56.566 0000:00:06.0: build_io_request_8 Invalid IO length parameter
00:09:56.566 0000:00:06.0: build_io_request_9 Invalid IO length parameter
00:09:56.566 0000:00:06.0: build_io_request_11 Invalid IO length parameter
00:09:56.566 0000:00:07.0: build_io_request_0 Invalid IO length parameter
00:09:56.566 0000:00:07.0: build_io_request_1 Invalid IO length parameter
00:09:56.566 0000:00:07.0: build_io_request_3 Invalid IO length parameter
00:09:56.566 0000:00:07.0: build_io_request_8 Invalid IO length parameter
00:09:56.566 0000:00:07.0: build_io_request_9 Invalid IO length parameter
00:09:56.566 0000:00:07.0: build_io_request_11 Invalid IO length parameter
00:09:56.566 0000:00:08.0: build_io_request_0 Invalid IO length parameter
00:09:56.566 0000:00:08.0: build_io_request_1 Invalid IO length parameter
00:09:56.566 0000:00:08.0: build_io_request_2 Invalid IO length parameter
00:09:56.566 0000:00:08.0: build_io_request_3 Invalid IO length parameter
00:09:56.566 0000:00:08.0: build_io_request_4 Invalid IO length parameter
00:09:56.566 0000:00:08.0: build_io_request_5 Invalid IO length parameter
00:09:56.566 0000:00:08.0: build_io_request_6 Invalid IO length parameter
00:09:56.566 0000:00:08.0: build_io_request_7 Invalid IO length parameter
00:09:56.566 0000:00:08.0: build_io_request_8 Invalid IO length parameter
00:09:56.566 0000:00:08.0: build_io_request_9 Invalid IO length parameter
00:09:56.566 0000:00:08.0: build_io_request_10 Invalid IO length parameter
00:09:56.566 0000:00:08.0: build_io_request_11 Invalid IO length parameter
00:09:56.566 NVMe Readv/Writev Request test
00:09:56.566 Attached to 0000:00:09.0
00:09:56.566 Attached to 0000:00:06.0
00:09:56.566 Attached to 0000:00:07.0
00:09:56.566 Attached to 0000:00:08.0
00:09:56.566 0000:00:06.0: build_io_request_2 test passed
00:09:56.566 0000:00:06.0: build_io_request_4 test passed
00:09:56.566 0000:00:06.0: build_io_request_5 test passed
00:09:56.566 0000:00:06.0: build_io_request_6 test passed
00:09:56.566 0000:00:06.0: build_io_request_7 test passed
00:09:56.566 0000:00:06.0: build_io_request_10 test passed
00:09:56.566 0000:00:07.0: build_io_request_2 test passed
00:09:56.566 0000:00:07.0: build_io_request_4 test passed
00:09:56.566 0000:00:07.0: build_io_request_5 test passed
00:09:56.566 0000:00:07.0: build_io_request_6 test passed
00:09:56.566 0000:00:07.0: build_io_request_7 test passed
00:09:56.566 0000:00:07.0: build_io_request_10 test passed
00:09:56.566 Cleaning up...
00:09:56.566
00:09:56.566 real 0m0.310s
00:09:56.566 user 0m0.131s
00:09:56.566 sys 0m0.136s
00:09:56.566 17:56:13 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:56.566 17:56:13 -- common/autotest_common.sh@10 -- # set +x
00:09:56.566 ************************************
00:09:56.566 END TEST nvme_sgl
00:09:56.566 ************************************
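The sgl test drives the scatter-gather variants of the I/O API; the "Invalid IO length parameter" lines are expected rejections of requests whose gather list does not match the I/O size. A sketch of the callback-based SGL interface around spdk_nvme_ns_cmd_writev(), assuming an attached ns/qpair as in the hello_world sketch and 512-byte blocks:

    #include <stdbool.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    /* A two-element scatter-gather list covering one 4 KiB write. */
    struct sgl_io {
        void     *base[2];
        uint32_t  len[2];
        int       idx;
    };

    static void
    reset_sgl(void *arg, uint32_t sgl_offset)
    {
        ((struct sgl_io *)arg)->idx = 0; /* restart iteration; offset 0 only here */
    }

    static int
    next_sge(void *arg, void **address, uint32_t *length)
    {
        struct sgl_io *io = arg;

        *address = io->base[io->idx];
        *length = io->len[io->idx];
        io->idx++;
        return 0;
    }

    static void
    write_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        *(bool *)arg = true;
    }

    int
    submit_sgl_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair)
    {
        struct sgl_io io = { .idx = 0 };
        bool done = false;

        io.base[0] = spdk_zmalloc(0x800, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
        io.base[1] = spdk_zmalloc(0x800, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
        io.len[0] = io.len[1] = 0x800; /* 2 KiB + 2 KiB = one 4 KiB I/O */

        /* 8 blocks of 512 B = 4 KiB; if the SGE lengths summed to anything
         * else, the request would be rejected up front, like the
         * "Invalid IO length parameter" cases in the log. */
        int rc = spdk_nvme_ns_cmd_writev(ns, qpair, 0, 8, write_done, &done, 0,
                                         reset_sgl, next_sge);
        while (rc == 0 && !done) {
            spdk_nvme_qpair_process_completions(qpair, 0);
        }
        return rc;
    }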
00:09:56.566 17:56:13 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:56.566 17:56:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:56.566 17:56:13 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:56.824 17:56:13 -- common/autotest_common.sh@10 -- # set +x
00:09:56.824 ************************************
00:09:56.824 START TEST nvme_e2edp
00:09:56.824 ************************************
00:09:56.824 17:56:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:56.824 NVMe Write/Read with End-to-End data protection test
00:09:56.824 Attached to 0000:00:09.0
00:09:56.824 Attached to 0000:00:06.0
00:09:56.824 Attached to 0000:00:07.0
00:09:56.824 Attached to 0000:00:08.0
00:09:56.824 Cleaning up...
00:09:57.083
00:09:57.083 real 0m0.255s
00:09:57.083 user 0m0.090s
00:09:57.083 sys 0m0.120s
00:09:57.083 17:56:13 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:57.083 17:56:13 -- common/autotest_common.sh@10 -- # set +x
00:09:57.083 ************************************
00:09:57.083 END TEST nvme_e2edp
00:09:57.083 ************************************
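nvme_dp exercises end-to-end data protection. For namespaces formatted with per-block metadata, SPDK exposes *_with_md command variants plus PRACT/PRCHK I/O flags; the sketch below is hedged, since whether this path is exercisable depends on how the namespace is formatted (the flag and function names are taken from spdk/nvme.h):

    #include <stdbool.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static void
    dp_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        *(bool *)arg = true;
    }

    /* Write one block, asking the controller to generate protection
     * information (PRACT) and to check the guard and reference tag
     * end to end. */
    int
    write_with_pi(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                  void *buf, void *md_buf)
    {
        bool done = false;
        uint32_t flags = SPDK_NVME_IO_FLAGS_PRACT |
                         SPDK_NVME_IO_FLAGS_PRCHK_GUARD |
                         SPDK_NVME_IO_FLAGS_PRCHK_REFTAG;

        if (spdk_nvme_ns_get_pi_type(ns) == SPDK_NVME_FMT_NVM_PROTECTION_DISABLE) {
            return -1; /* namespace not formatted with protection information */
        }
        int rc = spdk_nvme_ns_cmd_write_with_md(ns, qpair, buf, md_buf,
                                                0 /* LBA */, 1 /* blocks */,
                                                dp_done, &done, flags,
                                                0 /* apptag mask */, 0 /* apptag */);
        while (rc == 0 && !done) {
            spdk_nvme_qpair_process_completions(qpair, 0);
        }
        return rc;
    }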
00:09:57.083 17:56:13 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:57.083 17:56:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:57.083 17:56:13 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:57.083 17:56:13 -- common/autotest_common.sh@10 -- # set +x
00:09:57.083 ************************************
00:09:57.083 START TEST nvme_reserve
00:09:57.083 ************************************
00:09:57.083 17:56:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:57.341 =====================================================
00:09:57.341 NVMe Controller at PCI bus 0, device 9, function 0
00:09:57.341 =====================================================
00:09:57.341 Reservations: Not Supported
00:09:57.341 =====================================================
00:09:57.341 NVMe Controller at PCI bus 0, device 6, function 0
00:09:57.341 =====================================================
00:09:57.341 Reservations: Not Supported
00:09:57.341 =====================================================
00:09:57.341 NVMe Controller at PCI bus 0, device 7, function 0
00:09:57.341 =====================================================
00:09:57.341 Reservations: Not Supported
00:09:57.341 =====================================================
00:09:57.341 NVMe Controller at PCI bus 0, device 8, function 0
00:09:57.341 =====================================================
00:09:57.341 Reservations: Not Supported
00:09:57.341 Reservation test passed
00:09:57.341
00:09:57.341 real 0m0.247s
00:09:57.341 user 0m0.081s
00:09:57.341 sys 0m0.117s
00:09:57.341 17:56:14 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:57.341 17:56:14 -- common/autotest_common.sh@10 -- # set +x
00:09:57.341 ************************************
00:09:57.341 END TEST nvme_reserve
00:09:57.341 ************************************
00:09:57.341 17:56:14 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:57.341 17:56:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:57.341 17:56:14 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:57.341 17:56:14 -- common/autotest_common.sh@10 -- # set +x
00:09:57.341 ************************************
00:09:57.341 START TEST nvme_err_injection
00:09:57.341 ************************************
00:09:57.341 17:56:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:57.599 NVMe Error Injection test
00:09:57.599 Attached to 0000:00:09.0
00:09:57.599 Attached to 0000:00:06.0
00:09:57.599 Attached to 0000:00:07.0
00:09:57.599 Attached to 0000:00:08.0
00:09:57.599 0000:00:08.0: get features failed as expected
00:09:57.599 0000:00:09.0: get features failed as expected
00:09:57.599 0000:00:06.0: get features failed as expected
00:09:57.599 0000:00:07.0: get features failed as expected
00:09:57.599 0000:00:09.0: get features successfully as expected
00:09:57.599 0000:00:06.0: get features successfully as expected
00:09:57.599 0000:00:07.0: get features successfully as expected
00:09:57.599 0000:00:08.0: get features successfully as expected
00:09:57.599 0000:00:07.0: read failed as expected
00:09:57.599 0000:00:09.0: read failed as expected
00:09:57.600 0000:00:06.0: read failed as expected
00:09:57.600 0000:00:08.0: read failed as expected
00:09:57.600 0000:00:09.0: read successfully as expected
00:09:57.600 0000:00:06.0: read successfully as expected
00:09:57.600 0000:00:07.0: read successfully as expected
00:09:57.600 0000:00:08.0: read successfully as expected
00:09:57.600 Cleaning up...
00:09:57.600
00:09:57.600 real 0m0.268s
00:09:57.600 user 0m0.089s
00:09:57.600 sys 0m0.129s
00:09:57.600 17:56:14 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:57.600 17:56:14 -- common/autotest_common.sh@10 -- # set +x
00:09:57.600 ************************************
00:09:57.600 END TEST nvme_err_injection
00:09:57.600 ************************************
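err_injection makes commands fail on demand ("failed as expected") and then succeed again once the injection is cleared. SPDK provides an error-injection hook for exactly this; the sketch below assumes spdk_nvme_qpair_add_cmd_error_injection() as declared in spdk/nvme.h, where a NULL qpair targets the admin queue:

    #include "spdk/nvme.h"

    /* Arm the admin queue so the next GET FEATURES fails with a generic
     * Invalid Field status, then disarm it so a retry succeeds. */
    int
    inject_get_features_error(struct spdk_nvme_ctrlr *ctrlr)
    {
        int rc = spdk_nvme_qpair_add_cmd_error_injection(ctrlr,
                NULL,                        /* NULL qpair = admin queue */
                SPDK_NVME_OPC_GET_FEATURES,  /* opcode to intercept */
                false,                       /* still submit the command */
                0,                           /* no artificial timeout */
                1,                           /* fail exactly one command */
                SPDK_NVME_SCT_GENERIC,
                SPDK_NVME_SC_INVALID_FIELD);
        if (rc != 0) {
            return rc;
        }
        /* ... issue GET FEATURES here and observe the expected failure ... */
        spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
                                                   SPDK_NVME_OPC_GET_FEATURES);
        return 0;
    }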
00:09:57.600 17:56:14 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:09:57.600 17:56:14 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:09:57.600 17:56:14 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:57.600 17:56:14 -- common/autotest_common.sh@10 -- # set +x
00:09:57.600 ************************************
00:09:57.600 START TEST nvme_overhead
00:09:57.600 ************************************
00:09:57.600 17:56:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:09:58.975 Initializing NVMe Controllers
00:09:58.975 Attached to 0000:00:09.0
00:09:58.975 Attached to 0000:00:06.0
00:09:58.975 Attached to 0000:00:07.0
00:09:58.975 Attached to 0000:00:08.0
00:09:58.975 Initialization complete. Launching workers.
00:09:58.975 submit (in ns) avg, min, max = 13326.3, 10531.7, 73652.2
00:09:58.975 complete (in ns) avg, min, max = 8876.9, 7873.1, 46975.1
00:09:58.975
00:09:58.975 Submit histogram
00:09:58.975 ================
00:09:58.975 Range in us Cumulative Count
00:09:58.975 10.487 - 10.538: 0.0163% ( 1)
00:09:58.975 [... intermediate buckets from 10.538 us to 50.378 us elided; cumulative count reaches 99.9837% ...]
00:09:58.975 73.613 - 74.024: 100.0000% ( 1)
00:09:58.975
00:09:58.976 Complete histogram
00:09:58.976 ==================
00:09:58.976 Range in us Cumulative Count
00:09:58.976 7.865 - 7.916: 0.6866% ( 42)
00:09:58.976 [... intermediate buckets from 7.916 us to 46.882 us elided ...]
00:09:58.977 46.882 - 47.088: 100.0000% ( 2)
00:09:58.977
00:09:58.977 real 0m1.248s
00:09:58.977 user 0m1.070s
00:09:58.977 sys 0m0.131s
00:09:58.977 17:56:15 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:58.977 17:56:15 -- common/autotest_common.sh@10 -- # set +x
00:09:58.977 ************************************
00:09:58.977 END TEST nvme_overhead
00:09:58.977 ************************************
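The submit/complete figures above are nanosecond statistics of TSC deltas taken around the submit call and around the completion poll. A sketch of that measurement idea (again assuming an attached ns/qpair and a DMA buffer; the real overhead tool aggregates many samples into the histograms shown):

    #include <stdio.h>
    #include <stdbool.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static void
    ovh_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        *(bool *)arg = true;
    }

    /* Measure the CPU cost of one submission and of the poll that reaps
     * its completion, in nanoseconds. */
    void
    measure_overhead_once(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, void *buf)
    {
        uint64_t hz = spdk_get_ticks_hz();
        uint64_t t0, submit_ns, complete_ns = 0;
        bool done = false;

        t0 = spdk_get_ticks();
        spdk_nvme_ns_cmd_read(ns, qpair, buf, 0, 1, ovh_done, &done, 0);
        submit_ns = (spdk_get_ticks() - t0) * 1000000000ULL / hz;

        while (!done) {
            t0 = spdk_get_ticks();
            if (spdk_nvme_qpair_process_completions(qpair, 0) > 0) {
                complete_ns = (spdk_get_ticks() - t0) * 1000000000ULL / hz;
            }
        }
        printf("submit %llu ns, complete %llu ns\n",
               (unsigned long long)submit_ns, (unsigned long long)complete_ns);
    }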
00:09:58.977 17:56:15 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:09:58.977 17:56:15 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:09:58.977 17:56:15 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:58.977 17:56:15 -- common/autotest_common.sh@10 -- # set +x
00:09:58.977 ************************************
00:09:58.977 START TEST nvme_arbitration
00:09:58.977 ************************************
00:09:58.977 17:56:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:10:02.265 Initializing NVMe Controllers
00:10:02.265 Attached to 0000:00:09.0
00:10:02.265 Attached to 0000:00:06.0
00:10:02.265 Attached to 0000:00:07.0
00:10:02.265 Attached to 0000:00:08.0
00:10:02.265 Associating QEMU NVMe Ctrl (12343 ) with lcore 0
00:10:02.265 Associating QEMU NVMe Ctrl (12340 ) with lcore 1
00:10:02.265 Associating QEMU NVMe Ctrl (12341 ) with lcore 2
00:10:02.265 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:10:02.265 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:10:02.265 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:10:02.265 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:10:02.265 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:10:02.265 Initialization complete. Launching workers.
00:10:02.265 Starting thread on core 1 with urgent priority queue
00:10:02.265 Starting thread on core 2 with urgent priority queue
00:10:02.265 Starting thread on core 3 with urgent priority queue
00:10:02.265 Starting thread on core 0 with urgent priority queue
00:10:02.265 QEMU NVMe Ctrl (12343 ) core 0: 4714.67 IO/s 21.21 secs/100000 ios
00:10:02.265 QEMU NVMe Ctrl (12342 ) core 0: 4714.67 IO/s 21.21 secs/100000 ios
00:10:02.265 QEMU NVMe Ctrl (12340 ) core 1: 4370.67 IO/s 22.88 secs/100000 ios
00:10:02.265 QEMU NVMe Ctrl (12342 ) core 1: 4373.33 IO/s 22.87 secs/100000 ios
00:10:02.265 QEMU NVMe Ctrl (12341 ) core 2: 4544.00 IO/s 22.01 secs/100000 ios
00:10:02.265 QEMU NVMe Ctrl (12342 ) core 3: 4605.00 IO/s 21.72 secs/100000 ios
00:10:02.265 ========================================================
00:10:02.265
00:10:02.265 real 0m3.308s
00:10:02.265 user 0m9.098s
00:10:02.265 sys 0m0.149s
00:10:02.265 17:56:19 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:02.265 17:56:19 -- common/autotest_common.sh@10 -- # set +x
00:10:02.265 ************************************
00:10:02.265 END TEST nvme_arbitration
00:10:02.265 ************************************
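arbitration floods queue pairs of different priorities from several cores (the "urgent priority queue" threads above). With SPDK, a qpair's priority is chosen when it is allocated; a sketch, with the assumption that weighted round robin was enabled at attach time (opts->arb_mechanism = SPDK_NVME_CC_AMS_WRR in probe_cb) and that the device supports it:

    #include "spdk/nvme.h"

    /* Allocate an urgent-priority I/O qpair. Queue priorities only take
     * effect under weighted round robin arbitration. */
    struct spdk_nvme_qpair *
    alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_io_qpair_opts qpopts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &qpopts, sizeof(qpopts));
        qpopts.qprio = SPDK_NVME_QPRIO_URGENT; /* vs HIGH / MEDIUM / LOW */
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &qpopts, sizeof(qpopts));
    }

The IO/s spread in the results above then reflects how the controller weights urgent queues against the others.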
00:10:02.265 17:56:19 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log
00:10:02.265 17:56:19 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:10:02.265 17:56:19 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:02.265 17:56:19 -- common/autotest_common.sh@10 -- # set +x
00:10:02.265 ************************************
00:10:02.265 START TEST nvme_single_aen
00:10:02.265 ************************************
00:10:02.265 17:56:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log
00:10:02.524 [2024-11-26 17:56:19.220485] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:02.524 [2024-11-26 17:56:19.220569] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:02.524 [2024-11-26 17:56:19.387628] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller
00:10:02.524 [2024-11-26 17:56:19.389167] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller
00:10:02.524 [2024-11-26 17:56:19.390429] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller
00:10:02.524 [2024-11-26 17:56:19.391632] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller
00:10:02.524 Asynchronous Event Request test
00:10:02.524 Attached to 0000:00:09.0
00:10:02.525 Attached to 0000:00:06.0
00:10:02.525 Attached to 0000:00:07.0
00:10:02.525 Attached to 0000:00:08.0
00:10:02.525 Reset controller to setup AER completions for this process
00:10:02.525 Registering asynchronous event callbacks...
00:10:02.525 Getting orig temperature thresholds of all controllers
00:10:02.525 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:10:02.525 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:10:02.525 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:10:02.525 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:10:02.525 Setting all controllers temperature threshold low to trigger AER
00:10:02.525 Waiting for all controllers temperature threshold to be set lower
00:10:02.525 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:10:02.525 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0
00:10:02.525 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:10:02.525 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0
00:10:02.525 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:10:02.525 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0
00:10:02.525 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:10:02.525 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0
00:10:02.525 Waiting for all controllers to trigger AER and reset threshold
00:10:02.525 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius)
00:10:02.525 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius)
00:10:02.525 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius)
00:10:02.525 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius)
00:10:02.525 Cleaning up...
00:10:02.525
00:10:02.525 real 0m0.260s
00:10:02.525 user 0m0.091s
00:10:02.525 sys 0m0.120s
00:10:02.525 17:56:19 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:02.525 17:56:19 -- common/autotest_common.sh@10 -- # set +x
00:10:02.525 ************************************
00:10:02.525 END TEST nvme_single_aen
00:10:02.525 ************************************
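The aer test registers an asynchronous event callback and lowers each controller's temperature threshold below the reported 323 K so the drive raises the event seen above. A sketch of those two steps with SPDK's admin API; the 200 K threshold value is illustrative:

    #include <stdio.h>
    #include <stdbool.h>
    #include "spdk/nvme.h"

    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        if (!spdk_nvme_cpl_is_error(cpl)) {
            /* cdw0 encodes the async event type and info, e.g. the
             * temperature-threshold event logged above. */
            printf("aer_cb: cdw0 = 0x%08x\n", cpl->cdw0);
        }
    }

    static void
    set_feat_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        *(bool *)arg = true;
    }

    /* Register for async events, then drop the temperature threshold
     * (feature 0x04) below the current temperature so one fires. */
    void
    trigger_temperature_aer(struct spdk_nvme_ctrlr *ctrlr)
    {
        bool done = false;

        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
        spdk_nvme_ctrlr_cmd_set_feature(ctrlr, SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                                        200 /* Kelvin, illustrative */, 0, NULL, 0,
                                        set_feat_done, &done);
        while (!done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
    }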
00:10:02.783 17:56:19 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:10:02.783 17:56:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:10:02.783 17:56:19 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:02.783 17:56:19 -- common/autotest_common.sh@10 -- # set +x
00:10:02.783 ************************************
00:10:02.783 START TEST nvme_doorbell_aers
00:10:02.783 ************************************
00:10:02.783 17:56:19 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers
00:10:02.783 17:56:19 -- nvme/nvme.sh@70 -- # bdfs=()
00:10:02.783 17:56:19 -- nvme/nvme.sh@70 -- # local bdfs bdf
00:10:02.783 17:56:19 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:10:02.783 17:56:19 -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:10:02.783 17:56:19 -- common/autotest_common.sh@1508 -- # bdfs=()
00:10:02.783 17:56:19 -- common/autotest_common.sh@1508 -- # local bdfs
00:10:02.783 17:56:19 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:10:02.783 17:56:19 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:10:02.783 17:56:19 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:10:02.783 17:56:19 -- common/autotest_common.sh@1510 -- # (( 4 == 0 ))
00:10:02.783 17:56:19 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0
00:10:02.783 17:56:19 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:10:02.783 17:56:19 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0'
00:10:03.041 [2024-11-26 17:56:19.880178] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75648) is not found. Dropping the request.
00:10:13.015 Executing: test_write_invalid_db
00:10:13.015 Waiting for AER completion...
00:10:13.015 Failure: test_write_invalid_db
00:10:13.015
00:10:13.015 Executing: test_invalid_db_write_overflow_sq
00:10:13.015 Waiting for AER completion...
00:10:13.015 Failure: test_invalid_db_write_overflow_sq
00:10:13.015
00:10:13.015 Executing: test_invalid_db_write_overflow_cq
00:10:13.015 Waiting for AER completion...
00:10:13.015 Failure: test_invalid_db_write_overflow_cq
00:10:13.015
00:10:13.015 17:56:29 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:10:13.015 17:56:29 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0'
00:10:13.015 [2024-11-26 17:56:29.923054] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75648) is not found. Dropping the request.
00:10:23.022 Executing: test_write_invalid_db
00:10:23.022 Waiting for AER completion...
00:10:23.022 Failure: test_write_invalid_db
00:10:23.022
00:10:23.022 Executing: test_invalid_db_write_overflow_sq
00:10:23.022 Waiting for AER completion...
00:10:23.023 Failure: test_invalid_db_write_overflow_sq
00:10:23.023
00:10:23.023 Executing: test_invalid_db_write_overflow_cq
00:10:23.023 Waiting for AER completion...
00:10:23.023 Failure: test_invalid_db_write_overflow_cq
00:10:23.023
00:10:23.023 17:56:39 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:10:23.023 17:56:39 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0'
00:10:23.023 [2024-11-26 17:56:39.949953] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75648) is not found. Dropping the request.
00:10:33.261 Executing: test_write_invalid_db
00:10:33.261 Waiting for AER completion...
00:10:33.261 Failure: test_write_invalid_db
00:10:33.261
00:10:33.261 Executing: test_invalid_db_write_overflow_sq
00:10:33.261 Waiting for AER completion...
00:10:33.261 Failure: test_invalid_db_write_overflow_sq
00:10:33.261
00:10:33.261 Executing: test_invalid_db_write_overflow_cq
00:10:33.261 Waiting for AER completion...
00:10:33.261 Failure: test_invalid_db_write_overflow_cq
00:10:33.261
00:10:33.261 17:56:49 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:10:33.261 17:56:49 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0'
00:10:33.261 [2024-11-26 17:56:50.037002] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75648) is not found. Dropping the request.
00:10:43.245 Executing: test_write_invalid_db
00:10:43.245 Waiting for AER completion...
00:10:43.245 Failure: test_write_invalid_db
00:10:43.245
00:10:43.245 Executing: test_invalid_db_write_overflow_sq
00:10:43.245 Waiting for AER completion...
00:10:43.245 Failure: test_invalid_db_write_overflow_sq
00:10:43.245
00:10:43.245 Executing: test_invalid_db_write_overflow_cq
00:10:43.245 Waiting for AER completion...
00:10:43.245 Failure: test_invalid_db_write_overflow_cq 00:10:43.245 00:10:43.245 00:10:43.245 real 0m40.305s 00:10:43.245 user 0m28.321s 00:10:43.245 sys 0m11.616s 00:10:43.245 17:56:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:43.245 ************************************ 00:10:43.245 17:56:59 -- common/autotest_common.sh@10 -- # set +x 00:10:43.245 END TEST nvme_doorbell_aers 00:10:43.245 ************************************ 00:10:43.245 17:56:59 -- nvme/nvme.sh@97 -- # uname 00:10:43.246 17:56:59 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:43.246 17:56:59 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:43.246 17:56:59 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:43.246 17:56:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:43.246 17:56:59 -- common/autotest_common.sh@10 -- # set +x 00:10:43.246 ************************************ 00:10:43.246 START TEST nvme_multi_aen 00:10:43.246 ************************************ 00:10:43.246 17:56:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:43.246 [2024-11-26 17:56:59.945796] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:43.246 [2024-11-26 17:56:59.945883] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.246 [2024-11-26 17:57:00.101044] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:10:43.246 [2024-11-26 17:57:00.101301] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75648) is not found. Dropping the request. 00:10:43.246 [2024-11-26 17:57:00.101471] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75648) is not found. Dropping the request. 00:10:43.246 [2024-11-26 17:57:00.101544] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75648) is not found. Dropping the request. 00:10:43.246 [2024-11-26 17:57:00.103174] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:43.246 [2024-11-26 17:57:00.103310] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75648) is not found. Dropping the request. 00:10:43.246 [2024-11-26 17:57:00.103465] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75648) is not found. Dropping the request. 00:10:43.246 [2024-11-26 17:57:00.103536] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75648) is not found. Dropping the request. 00:10:43.246 [2024-11-26 17:57:00.105070] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:10:43.246 [2024-11-26 17:57:00.105196] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75648) is not found. Dropping the request. 00:10:43.246 [2024-11-26 17:57:00.105321] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75648) is not found. Dropping the request. 
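Before the multi-controller AER output continues below, it is worth noting how the doorbell section above was driven. The per-controller "Failure:" lines appear to be the expected outcome of deliberately invalid doorbell writes (the section still ends in END TEST rather than an error), with one binary run per controller. A minimal sketch of that loop, reconstructed from the trace; the paths and the 10-second budget are verbatim, but the loop body is an illustrative reading of nvme.sh, not its exact source:

rootdir=/home/vagrant/spdk_repo/spdk
# Collect the PCIe addresses the same way the trace does: gen_nvme.sh emits
# a JSON config and jq pulls out each controller's traddr.
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
for bdf in "${bdfs[@]}"; do
    # --preserve-status keeps the test binary's own exit code even if the
    # 10 s budget expires while it waits on AER completions.
    timeout --preserve-status 10 \
        "$rootdir/test/nvme/doorbell_aers/doorbell_aers" \
        -r "trtype:PCIe traddr:$bdf"
done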
00:10:43.246 [2024-11-26 17:57:00.105388] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75648) is not found. Dropping the request. 00:10:43.246 [2024-11-26 17:57:00.106780] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:43.246 [2024-11-26 17:57:00.106892] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75648) is not found. Dropping the request. 00:10:43.246 [2024-11-26 17:57:00.107014] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75648) is not found. Dropping the request. 00:10:43.246 [2024-11-26 17:57:00.107073] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75648) is not found. Dropping the request. 00:10:43.246 [2024-11-26 17:57:00.124806] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:43.246 [2024-11-26 17:57:00.125295] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 Child process pid: 76165 00:10:43.246 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.507 [Child] Asynchronous Event Request test 00:10:43.507 [Child] Attached to 0000:00:09.0 00:10:43.507 [Child] Attached to 0000:00:06.0 00:10:43.507 [Child] Attached to 0000:00:07.0 00:10:43.507 [Child] Attached to 0000:00:08.0 00:10:43.507 [Child] Registering asynchronous event callbacks... 00:10:43.507 [Child] Getting orig temperature thresholds of all controllers 00:10:43.507 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:43.507 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:43.507 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:43.507 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:43.507 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:43.507 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:43.507 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:43.507 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:43.507 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:43.507 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:43.507 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:43.507 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:43.507 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:43.507 [Child] Cleaning up... 00:10:43.507 Asynchronous Event Request test 00:10:43.507 Attached to 0000:00:09.0 00:10:43.507 Attached to 0000:00:06.0 00:10:43.507 Attached to 0000:00:07.0 00:10:43.507 Attached to 0000:00:08.0 00:10:43.507 Reset controller to setup AER completions for this process 00:10:43.507 Registering asynchronous event callbacks... 
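The [Child] block above is the crux of nvme_multi_aen: the aer tool, launched as aer -m -T -i 0 -L log, runs a second copy of itself as a DPDK secondary process. Reading the two EAL banners, the parent came up with -c 0x1 and the child (pid 76165) with -c 0x2, both under --file-prefix=spdk0, which is what lets them attach to the same four controllers; the child verifies AER delivery first, then the parent resets each controller and repeats the temperature-threshold cycle below. Taking -m as "spawn the multi-process child" and -T as "run the temperature-threshold variant" is an assumption based on this output, not on the aer source. A hedged sketch of the invocation:

rootdir=/home/vagrant/spdk_repo/spdk
# Same command the trace shows run_test wrapping; -i 0 pins the shared
# memory id so parent and child land in the same spdk0 hugepage files.
"$rootdir/test/nvme/aer/aer" -m -T -i 0 -L log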
00:10:43.507 Getting orig temperature thresholds of all controllers 00:10:43.507 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:43.507 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:43.507 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:43.507 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:43.507 Setting all controllers temperature threshold low to trigger AER 00:10:43.507 Waiting for all controllers temperature threshold to be set lower 00:10:43.507 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:43.507 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:10:43.507 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:43.507 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:10:43.507 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:43.507 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:10:43.507 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:43.507 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:10:43.507 Waiting for all controllers to trigger AER and reset threshold 00:10:43.507 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:43.507 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:43.507 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:43.507 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:43.507 Cleaning up... 00:10:43.507 00:10:43.507 real 0m0.508s 00:10:43.507 user 0m0.177s 00:10:43.507 sys 0m0.232s 00:10:43.507 17:57:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:43.507 17:57:00 -- common/autotest_common.sh@10 -- # set +x 00:10:43.507 ************************************ 00:10:43.507 END TEST nvme_multi_aen 00:10:43.507 ************************************ 00:10:43.768 17:57:00 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:43.768 17:57:00 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:43.768 17:57:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:43.768 17:57:00 -- common/autotest_common.sh@10 -- # set +x 00:10:43.768 ************************************ 00:10:43.768 START TEST nvme_startup 00:10:43.768 ************************************ 00:10:43.768 17:57:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:44.028 Initializing NVMe Controllers 00:10:44.028 Attached to 0000:00:09.0 00:10:44.028 Attached to 0000:00:06.0 00:10:44.028 Attached to 0000:00:07.0 00:10:44.028 Attached to 0000:00:08.0 00:10:44.028 Initialization complete. 00:10:44.028 Time used:166920.672 (us). 
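For scale, Time used:166920.672 (us) means all four controllers attached and initialized in about 0.167 s, consistent with the 0m0.248s wall clock reported just below once process startup and teardown are added. The -t 1000000 argument passed to the startup binary is presumably a 1,000,000 us (one second) budget; that reading is an assumption from the units in the output, since only the log, not the startup source, is visible here.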
00:10:44.028 ************************************ 00:10:44.028 END TEST nvme_startup 00:10:44.028 ************************************ 00:10:44.028 00:10:44.028 real 0m0.248s 00:10:44.028 user 0m0.078s 00:10:44.028 sys 0m0.127s 00:10:44.028 17:57:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:44.028 17:57:00 -- common/autotest_common.sh@10 -- # set +x 00:10:44.028 17:57:00 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:44.028 17:57:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:44.028 17:57:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:44.028 17:57:00 -- common/autotest_common.sh@10 -- # set +x 00:10:44.028 ************************************ 00:10:44.028 START TEST nvme_multi_secondary 00:10:44.028 ************************************ 00:10:44.028 17:57:00 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary 00:10:44.028 17:57:00 -- nvme/nvme.sh@52 -- # pid0=76220 00:10:44.028 17:57:00 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:44.028 17:57:00 -- nvme/nvme.sh@54 -- # pid1=76221 00:10:44.028 17:57:00 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:44.028 17:57:00 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:47.320 Initializing NVMe Controllers 00:10:47.320 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:47.320 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:47.320 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:47.320 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:47.320 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:47.320 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:47.320 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:47.320 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:47.320 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:47.320 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:47.320 Initialization complete. Launching workers. 
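nvme_multi_secondary, which has just started above, runs three spdk_nvme_perf instances at once against the same controllers; the sketch below condenses the trace (all flags verbatim). The point of the shared -i 0 is that all three map the same shared-memory id, so two of them attach as DPDK secondary processes to controllers the first one owns. Which instance ends up as the primary depends on start order, so treat the comments as a reading of the log, not a guarantee. The latency tables that follow are the per-instance results.

perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
"$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # core 0, 5 s run (pid0=76220 in the trace)
pid0=$!
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # core 1, 3 s run (pid1=76221)
pid1=$!
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4     # core 2, 3 s run, foreground
wait "$pid0"
wait "$pid1"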
00:10:47.320 ======================================================== 00:10:47.320 Latency(us) 00:10:47.320 Device Information : IOPS MiB/s Average min max 00:10:47.320 PCIE (0000:00:09.0) NSID 1 from core 2: 2990.43 11.68 5349.54 1421.27 14333.81 00:10:47.320 PCIE (0000:00:06.0) NSID 1 from core 2: 2990.43 11.68 5348.69 1291.83 14165.42 00:10:47.320 PCIE (0000:00:07.0) NSID 1 from core 2: 2990.43 11.68 5350.20 1442.41 14743.96 00:10:47.320 PCIE (0000:00:08.0) NSID 1 from core 2: 2990.43 11.68 5350.21 1361.86 18754.08 00:10:47.320 PCIE (0000:00:08.0) NSID 2 from core 2: 2990.43 11.68 5349.22 1316.45 14626.40 00:10:47.320 PCIE (0000:00:08.0) NSID 3 from core 2: 2990.43 11.68 5350.16 1078.82 14431.83 00:10:47.320 ======================================================== 00:10:47.320 Total : 17942.58 70.09 5349.67 1078.82 18754.08 00:10:47.320 00:10:47.320 17:57:04 -- nvme/nvme.sh@56 -- # wait 76220 00:10:47.578 Initializing NVMe Controllers 00:10:47.578 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:47.578 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:47.578 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:47.578 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:47.578 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:47.578 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:47.578 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:47.578 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:47.578 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:47.578 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:47.578 Initialization complete. Launching workers. 00:10:47.578 ======================================================== 00:10:47.578 Latency(us) 00:10:47.578 Device Information : IOPS MiB/s Average min max 00:10:47.578 PCIE (0000:00:09.0) NSID 1 from core 1: 5084.17 19.86 3146.11 1380.19 6371.86 00:10:47.578 PCIE (0000:00:06.0) NSID 1 from core 1: 5084.17 19.86 3144.25 1347.98 6182.49 00:10:47.578 PCIE (0000:00:07.0) NSID 1 from core 1: 5084.17 19.86 3145.83 1513.34 6250.53 00:10:47.578 PCIE (0000:00:08.0) NSID 1 from core 1: 5084.17 19.86 3145.58 1287.01 6414.03 00:10:47.578 PCIE (0000:00:08.0) NSID 2 from core 1: 5084.17 19.86 3145.57 1309.72 6512.32 00:10:47.578 PCIE (0000:00:08.0) NSID 3 from core 1: 5084.17 19.86 3145.37 1315.66 6392.83 00:10:47.578 ======================================================== 00:10:47.578 Total : 30505.03 119.16 3145.45 1287.01 6512.32 00:10:47.578 00:10:49.481 Initializing NVMe Controllers 00:10:49.481 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:49.481 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:49.481 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:49.481 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:49.481 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:49.481 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:49.481 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:49.481 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:49.481 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:49.481 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:49.481 Initialization complete. Launching workers. 
00:10:49.481 ======================================================== 00:10:49.481 Latency(us) 00:10:49.481 Device Information : IOPS MiB/s Average min max 00:10:49.481 PCIE (0000:00:09.0) NSID 1 from core 0: 8004.84 31.27 1998.26 976.85 7357.29 00:10:49.481 PCIE (0000:00:06.0) NSID 1 from core 0: 8004.84 31.27 1997.17 964.35 7634.24 00:10:49.481 PCIE (0000:00:07.0) NSID 1 from core 0: 8004.84 31.27 1998.24 980.53 7663.95 00:10:49.481 PCIE (0000:00:08.0) NSID 1 from core 0: 8004.84 31.27 1998.20 747.77 7316.77 00:10:49.481 PCIE (0000:00:08.0) NSID 2 from core 0: 8004.84 31.27 1998.15 621.84 7248.78 00:10:49.481 PCIE (0000:00:08.0) NSID 3 from core 0: 8004.84 31.27 1998.11 515.47 7653.55 00:10:49.481 ======================================================== 00:10:49.481 Total : 48029.05 187.61 1998.02 515.47 7663.95 00:10:49.481 00:10:49.481 17:57:06 -- nvme/nvme.sh@57 -- # wait 76221 00:10:49.481 17:57:06 -- nvme/nvme.sh@61 -- # pid0=76295 00:10:49.481 17:57:06 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:49.481 17:57:06 -- nvme/nvme.sh@63 -- # pid1=76296 00:10:49.481 17:57:06 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:49.481 17:57:06 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:52.774 Initializing NVMe Controllers 00:10:52.774 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:52.774 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:52.774 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:52.774 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:52.774 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:52.774 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:52.774 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:52.774 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:52.774 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:52.774 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:52.774 Initialization complete. Launching workers. 
00:10:52.774 ======================================================== 00:10:52.774 Latency(us) 00:10:52.774 Device Information : IOPS MiB/s Average min max 00:10:52.774 PCIE (0000:00:09.0) NSID 1 from core 0: 4738.03 18.51 3376.08 1159.85 6500.69 00:10:52.774 PCIE (0000:00:06.0) NSID 1 from core 0: 4738.03 18.51 3374.40 1114.06 6504.61 00:10:52.774 PCIE (0000:00:07.0) NSID 1 from core 0: 4738.03 18.51 3376.89 1139.28 6873.92 00:10:52.774 PCIE (0000:00:08.0) NSID 1 from core 0: 4738.03 18.51 3376.95 1132.57 7214.03 00:10:52.774 PCIE (0000:00:08.0) NSID 2 from core 0: 4738.03 18.51 3377.16 1149.71 6833.35 00:10:52.774 PCIE (0000:00:08.0) NSID 3 from core 0: 4738.03 18.51 3377.37 1152.22 6328.12 00:10:52.774 ======================================================== 00:10:52.774 Total : 28428.16 111.05 3376.48 1114.06 7214.03 00:10:52.774 00:10:53.032 Initializing NVMe Controllers 00:10:53.032 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:53.032 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:53.032 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:53.032 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:53.032 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:53.032 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:53.032 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:53.032 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:53.032 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:53.032 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:53.032 Initialization complete. Launching workers. 00:10:53.032 ======================================================== 00:10:53.032 Latency(us) 00:10:53.032 Device Information : IOPS MiB/s Average min max 00:10:53.032 PCIE (0000:00:09.0) NSID 1 from core 1: 4767.93 18.62 3354.79 1101.82 6855.00 00:10:53.033 PCIE (0000:00:06.0) NSID 1 from core 1: 4767.93 18.62 3352.79 1043.45 6488.96 00:10:53.033 PCIE (0000:00:07.0) NSID 1 from core 1: 4767.93 18.62 3354.60 1070.18 6040.24 00:10:53.033 PCIE (0000:00:08.0) NSID 1 from core 1: 4767.93 18.62 3354.41 818.10 6257.11 00:10:53.033 PCIE (0000:00:08.0) NSID 2 from core 1: 4767.93 18.62 3354.19 678.08 6296.55 00:10:53.033 PCIE (0000:00:08.0) NSID 3 from core 1: 4767.93 18.62 3353.98 568.30 6842.60 00:10:53.033 ======================================================== 00:10:53.033 Total : 28607.58 111.75 3354.13 568.30 6855.00 00:10:53.033 00:10:54.940 Initializing NVMe Controllers 00:10:54.940 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:54.940 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:54.940 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:54.940 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:54.940 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:54.940 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:54.940 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:54.940 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:54.940 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:54.940 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:54.940 Initialization complete. Launching workers. 
00:10:54.940 ======================================================== 00:10:54.940 Latency(us) 00:10:54.940 Device Information : IOPS MiB/s Average min max 00:10:54.940 PCIE (0000:00:09.0) NSID 1 from core 2: 3202.00 12.51 4996.50 1190.15 13057.86 00:10:54.940 PCIE (0000:00:06.0) NSID 1 from core 2: 3202.00 12.51 4994.86 1152.38 13386.03 00:10:54.940 PCIE (0000:00:07.0) NSID 1 from core 2: 3202.00 12.51 4996.39 1049.37 12900.89 00:10:54.940 PCIE (0000:00:08.0) NSID 1 from core 2: 3202.00 12.51 4996.08 1171.04 13412.91 00:10:54.940 PCIE (0000:00:08.0) NSID 2 from core 2: 3202.00 12.51 4995.74 1193.71 11585.25 00:10:54.940 PCIE (0000:00:08.0) NSID 3 from core 2: 3202.00 12.51 4996.16 1194.43 11436.37 00:10:54.940 ======================================================== 00:10:54.940 Total : 19212.00 75.05 4995.95 1049.37 13412.91 00:10:54.940 00:10:54.940 ************************************ 00:10:54.940 END TEST nvme_multi_secondary 00:10:54.940 ************************************ 00:10:54.940 17:57:11 -- nvme/nvme.sh@65 -- # wait 76295 00:10:54.940 17:57:11 -- nvme/nvme.sh@66 -- # wait 76296 00:10:54.940 00:10:54.940 real 0m10.727s 00:10:54.940 user 0m18.420s 00:10:54.940 sys 0m0.843s 00:10:54.940 17:57:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:54.940 17:57:11 -- common/autotest_common.sh@10 -- # set +x 00:10:54.940 17:57:11 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:54.940 17:57:11 -- nvme/nvme.sh@102 -- # kill_stub 00:10:54.940 17:57:11 -- common/autotest_common.sh@1075 -- # [[ -e /proc/75225 ]] 00:10:54.940 17:57:11 -- common/autotest_common.sh@1076 -- # kill 75225 00:10:54.940 17:57:11 -- common/autotest_common.sh@1077 -- # wait 75225 00:10:55.878 [2024-11-26 17:57:12.495775] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76163) is not found. Dropping the request. 00:10:55.878 [2024-11-26 17:57:12.496544] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76163) is not found. Dropping the request. 00:10:55.878 [2024-11-26 17:57:12.496749] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76163) is not found. Dropping the request. 00:10:55.878 [2024-11-26 17:57:12.496837] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76163) is not found. Dropping the request. 00:10:56.137 [2024-11-26 17:57:13.007231] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76163) is not found. Dropping the request. 00:10:56.137 [2024-11-26 17:57:13.007537] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76163) is not found. Dropping the request. 00:10:56.137 [2024-11-26 17:57:13.007596] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76163) is not found. Dropping the request. 00:10:56.137 [2024-11-26 17:57:13.007967] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76163) is not found. Dropping the request. 00:10:56.704 [2024-11-26 17:57:13.517010] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76163) is not found. Dropping the request. 
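The *ERROR* bursts above and continuing below are expected here: kill_stub tears down the long-lived stub process that held the controllers between tests, and as each controller detaches, admin requests still owned by an already-exited test process (pid 76163 here, evidently one of the earlier AER test processes) get dropped. A minimal reconstruction of the step from the trace; the pid is hard-coded from the log, whereas in the scripts it comes from /var/run/spdk_stub0, which is removed just below:

stub_pid=75225
if [[ -e /proc/$stub_pid ]]; then
    kill "$stub_pid"
    # Reaping the stub detaches every controller; pending AERs from dead
    # owners are dropped, producing the nvme_pcie_common.c error lines.
    wait "$stub_pid"
fi
rm -f /var/run/spdk_stub0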
00:10:56.704 [2024-11-26 17:57:13.517575] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76163) is not found. Dropping the request. 00:10:56.704 [2024-11-26 17:57:13.517647] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76163) is not found. Dropping the request. 00:10:56.704 [2024-11-26 17:57:13.517708] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76163) is not found. Dropping the request. 00:10:58.610 [2024-11-26 17:57:15.520366] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76163) is not found. Dropping the request. 00:10:58.610 [2024-11-26 17:57:15.520599] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76163) is not found. Dropping the request. 00:10:58.610 [2024-11-26 17:57:15.520657] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76163) is not found. Dropping the request. 00:10:58.610 [2024-11-26 17:57:15.520710] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76163) is not found. Dropping the request. 00:10:58.868 17:57:15 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:10:58.868 17:57:15 -- common/autotest_common.sh@1083 -- # echo 2 00:10:58.868 17:57:15 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:58.868 17:57:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:58.868 17:57:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:58.868 17:57:15 -- common/autotest_common.sh@10 -- # set +x 00:10:58.868 ************************************ 00:10:58.868 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:58.868 ************************************ 00:10:58.868 17:57:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:58.868 * Looking for test storage... 00:10:58.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:58.869 17:57:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:59.126 17:57:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:59.126 17:57:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:59.126 17:57:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:59.126 17:57:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:59.126 17:57:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:59.126 17:57:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:59.126 17:57:15 -- scripts/common.sh@335 -- # IFS=.-: 00:10:59.126 17:57:15 -- scripts/common.sh@335 -- # read -ra ver1 00:10:59.126 17:57:15 -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.126 17:57:15 -- scripts/common.sh@336 -- # read -ra ver2 00:10:59.126 17:57:15 -- scripts/common.sh@337 -- # local 'op=<' 00:10:59.126 17:57:15 -- scripts/common.sh@339 -- # ver1_l=2 00:10:59.126 17:57:15 -- scripts/common.sh@340 -- # ver2_l=1 00:10:59.126 17:57:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:59.126 17:57:15 -- scripts/common.sh@343 -- # case "$op" in 00:10:59.126 17:57:15 -- scripts/common.sh@344 -- # : 1 00:10:59.126 17:57:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:59.126 17:57:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:59.126 17:57:15 -- scripts/common.sh@364 -- # decimal 1 00:10:59.126 17:57:15 -- scripts/common.sh@352 -- # local d=1 00:10:59.126 17:57:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.126 17:57:15 -- scripts/common.sh@354 -- # echo 1 00:10:59.126 17:57:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:59.126 17:57:15 -- scripts/common.sh@365 -- # decimal 2 00:10:59.126 17:57:15 -- scripts/common.sh@352 -- # local d=2 00:10:59.126 17:57:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.126 17:57:15 -- scripts/common.sh@354 -- # echo 2 00:10:59.126 17:57:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:59.126 17:57:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:59.126 17:57:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:59.126 17:57:15 -- scripts/common.sh@367 -- # return 0 00:10:59.126 17:57:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.126 17:57:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:59.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.126 --rc genhtml_branch_coverage=1 00:10:59.126 --rc genhtml_function_coverage=1 00:10:59.126 --rc genhtml_legend=1 00:10:59.126 --rc geninfo_all_blocks=1 00:10:59.126 --rc geninfo_unexecuted_blocks=1 00:10:59.126 00:10:59.126 ' 00:10:59.126 17:57:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:59.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.126 --rc genhtml_branch_coverage=1 00:10:59.126 --rc genhtml_function_coverage=1 00:10:59.126 --rc genhtml_legend=1 00:10:59.126 --rc geninfo_all_blocks=1 00:10:59.126 --rc geninfo_unexecuted_blocks=1 00:10:59.126 00:10:59.126 ' 00:10:59.126 17:57:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:59.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.126 --rc genhtml_branch_coverage=1 00:10:59.126 --rc genhtml_function_coverage=1 00:10:59.126 --rc genhtml_legend=1 00:10:59.126 --rc geninfo_all_blocks=1 00:10:59.126 --rc geninfo_unexecuted_blocks=1 00:10:59.126 00:10:59.126 ' 00:10:59.126 17:57:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:59.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.126 --rc genhtml_branch_coverage=1 00:10:59.126 --rc genhtml_function_coverage=1 00:10:59.126 --rc genhtml_legend=1 00:10:59.126 --rc geninfo_all_blocks=1 00:10:59.126 --rc geninfo_unexecuted_blocks=1 00:10:59.126 00:10:59.126 ' 00:10:59.126 17:57:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:59.126 17:57:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:59.126 17:57:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:59.126 17:57:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:59.126 17:57:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:59.126 17:57:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:59.126 17:57:15 -- common/autotest_common.sh@1519 -- # bdfs=() 00:10:59.126 17:57:15 -- common/autotest_common.sh@1519 -- # local bdfs 00:10:59.127 17:57:15 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:10:59.127 17:57:15 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:10:59.127 17:57:15 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:59.127 17:57:15 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:59.127 17:57:15 -- common/autotest_common.sh@1509 
-- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:59.127 17:57:15 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:59.127 17:57:15 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:59.127 17:57:16 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:59.127 17:57:16 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:59.127 17:57:16 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:10:59.127 17:57:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:10:59.127 17:57:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:10:59.127 17:57:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=76487 00:10:59.127 17:57:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:59.127 17:57:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:59.127 17:57:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 76487 00:10:59.127 17:57:16 -- common/autotest_common.sh@829 -- # '[' -z 76487 ']' 00:10:59.127 17:57:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.127 17:57:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:59.127 17:57:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.127 17:57:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:59.127 17:57:16 -- common/autotest_common.sh@10 -- # set +x 00:10:59.385 [2024-11-26 17:57:16.107797] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
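While the spdk_tgt above finishes initializing (its EAL banner continues below), the test's plan is easiest to see as the RPC sequence it drives, condensed here from the trace that follows; every flag and value, including the base64 command payload, appears verbatim further down. The injection holds the next GET FEATURES admin command (opc 10) for up to 15 s and, once a controller reset flushes it, completes it with sct=0/sc=1, which is why the log later prints INVALID OPCODE (00/01).

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0
"$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
# The Get Features (Number of Queues) command gets stuck behind the
# injection, so it is sent in the background:
"$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
    -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
get_feat_pid=$!
sleep 2
"$rpc" bdev_nvme_reset_controller nvme0   # the reset completes the stuck command manually
wait "$get_feat_pid"
"$rpc" bdev_nvme_detach_controller nvme0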
00:10:59.385 [2024-11-26 17:57:16.108117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76487 ] 00:10:59.385 [2024-11-26 17:57:16.275519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:59.644 [2024-11-26 17:57:16.320614] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:59.644 [2024-11-26 17:57:16.321508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.644 [2024-11-26 17:57:16.321744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.644 [2024-11-26 17:57:16.321698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:59.644 [2024-11-26 17:57:16.321874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.212 17:57:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:00.212 17:57:16 -- common/autotest_common.sh@862 -- # return 0 00:11:00.212 17:57:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:11:00.212 17:57:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.212 17:57:16 -- common/autotest_common.sh@10 -- # set +x 00:11:00.212 nvme0n1 00:11:00.212 17:57:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.212 17:57:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:00.212 17:57:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_nYv2G.txt 00:11:00.212 17:57:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:00.212 17:57:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.212 17:57:16 -- common/autotest_common.sh@10 -- # set +x 00:11:00.212 true 00:11:00.212 17:57:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.212 17:57:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:00.212 17:57:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732643837 00:11:00.212 17:57:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=76510 00:11:00.212 17:57:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:00.212 17:57:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:00.212 17:57:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:02.118 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:02.118 17:57:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.118 17:57:19 -- common/autotest_common.sh@10 -- # set +x 00:11:02.118 [2024-11-26 17:57:19.013305] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:11:02.118 [2024-11-26 17:57:19.013686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:02.118 [2024-11-26 17:57:19.013744] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:02.118 [2024-11-26 17:57:19.013761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.118 [2024-11-26 17:57:19.015583] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:02.118 17:57:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.118 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 76510 00:11:02.118 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 76510 00:11:02.118 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 76510 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:02.377 17:57:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.377 17:57:19 -- common/autotest_common.sh@10 -- # set +x 00:11:02.377 17:57:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_nYv2G.txt 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_nYv2G.txt 00:11:02.377 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 76487 00:11:02.377 17:57:19 -- common/autotest_common.sh@936 -- # '[' -z 76487 ']' 00:11:02.377 17:57:19 -- common/autotest_common.sh@940 -- # kill -0 76487 00:11:02.377 17:57:19 -- common/autotest_common.sh@941 -- # uname 00:11:02.377 17:57:19 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:02.377 17:57:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76487 00:11:02.377 killing process with pid 76487 00:11:02.377 17:57:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:02.377 17:57:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:02.377 17:57:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76487' 00:11:02.377 17:57:19 -- common/autotest_common.sh@955 -- # kill 76487 00:11:02.377 17:57:19 -- common/autotest_common.sh@960 -- # wait 76487 00:11:02.944 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:02.944 17:57:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:02.944 ************************************ 00:11:02.944 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:02.944 ************************************ 00:11:02.944 00:11:02.944 real 0m3.909s 00:11:02.944 user 0m13.193s 00:11:02.944 sys 0m0.761s 00:11:02.944 17:57:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:02.944 17:57:19 -- common/autotest_common.sh@10 -- # set +x 00:11:02.944 17:57:19 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:02.945 17:57:19 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:02.945 17:57:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:02.945 17:57:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:02.945 17:57:19 -- common/autotest_common.sh@10 -- # set +x 00:11:02.945 ************************************ 00:11:02.945 START TEST nvme_fio 00:11:02.945 ************************************ 00:11:02.945 17:57:19 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:11:02.945 17:57:19 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:02.945 17:57:19 -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:02.945 17:57:19 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:02.945 17:57:19 -- common/autotest_common.sh@1508 -- # bdfs=() 00:11:02.945 17:57:19 -- common/autotest_common.sh@1508 -- # local bdfs 00:11:02.945 17:57:19 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:02.945 17:57:19 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:11:02.945 17:57:19 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:02.945 17:57:19 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:11:02.945 17:57:19 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:02.945 17:57:19 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:11:02.945 17:57:19 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:02.945 17:57:19 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:02.945 17:57:19 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:02.945 17:57:19 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:03.205 17:57:20 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:03.205 17:57:20 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:03.464 17:57:20 -- nvme/nvme.sh@41 -- # bs=4096 00:11:03.464 17:57:20 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:03.464 17:57:20 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:03.464 17:57:20 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:03.464 17:57:20 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:03.464 17:57:20 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:03.464 17:57:20 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:03.464 17:57:20 -- common/autotest_common.sh@1330 -- # shift 00:11:03.464 17:57:20 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:03.464 17:57:20 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:03.464 17:57:20 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:03.464 17:57:20 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:03.464 17:57:20 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:03.464 17:57:20 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:03.464 17:57:20 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:03.464 17:57:20 -- common/autotest_common.sh@1336 -- # break 00:11:03.464 17:57:20 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:03.464 17:57:20 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:03.723 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:03.723 fio-3.35 00:11:03.723 Starting 1 thread 00:11:07.927 00:11:07.927 test: (groupid=0, jobs=1): err= 0: pid=76645: Tue Nov 26 17:57:24 2024 00:11:07.927 read: IOPS=22.7k, BW=88.7MiB/s (93.0MB/s)(177MiB/2001msec) 00:11:07.927 slat (nsec): min=3655, max=48897, avg=4456.42, stdev=1174.66 00:11:07.927 clat (usec): min=244, max=8518, avg=2817.42, stdev=418.83 00:11:07.927 lat (usec): min=249, max=8530, avg=2821.88, stdev=419.34 00:11:07.927 clat percentiles (usec): 00:11:07.927 | 1.00th=[ 2343], 5.00th=[ 2540], 10.00th=[ 2606], 20.00th=[ 2638], 00:11:07.927 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2769], 00:11:07.927 | 70.00th=[ 2802], 80.00th=[ 2868], 90.00th=[ 3163], 95.00th=[ 3392], 00:11:07.927 | 99.00th=[ 3982], 99.50th=[ 5342], 99.90th=[ 8291], 99.95th=[ 8455], 00:11:07.927 | 99.99th=[ 8455] 00:11:07.927 bw ( KiB/s): min=84152, max=94672, per=98.95%, avg=89874.67, stdev=5320.69, samples=3 00:11:07.927 iops : min=21038, max=23668, avg=22468.67, stdev=1330.17, samples=3 00:11:07.927 write: IOPS=22.6k, BW=88.2MiB/s (92.4MB/s)(176MiB/2001msec); 0 zone resets 00:11:07.927 slat (nsec): min=3776, max=63523, avg=4687.45, stdev=1195.49 00:11:07.927 clat (usec): min=210, max=8491, avg=2821.66, stdev=400.24 00:11:07.927 lat (usec): min=215, max=8503, avg=2826.35, stdev=400.70 00:11:07.927 clat percentiles (usec): 00:11:07.927 | 1.00th=[ 2376], 5.00th=[ 2540], 10.00th=[ 2606], 20.00th=[ 2638], 00:11:07.927 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2769], 00:11:07.927 | 70.00th=[ 2802], 80.00th=[ 2868], 90.00th=[ 3195], 95.00th=[ 3392], 00:11:07.927 | 99.00th=[ 3982], 99.50th=[ 
5211], 99.90th=[ 8160], 99.95th=[ 8356], 00:11:07.927 | 99.99th=[ 8455] 00:11:07.927 bw ( KiB/s): min=84064, max=95560, per=99.77%, avg=90069.33, stdev=5765.26, samples=3 00:11:07.927 iops : min=21016, max=23890, avg=22517.33, stdev=1441.31, samples=3 00:11:07.927 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:07.927 lat (msec) : 2=0.31%, 4=98.70%, 10=0.96% 00:11:07.927 cpu : usr=99.10%, sys=0.30%, ctx=4, majf=0, minf=626 00:11:07.927 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:07.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.927 issued rwts: total=45435,45159,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.927 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.928 00:11:07.928 Run status group 0 (all jobs): 00:11:07.928 READ: bw=88.7MiB/s (93.0MB/s), 88.7MiB/s-88.7MiB/s (93.0MB/s-93.0MB/s), io=177MiB (186MB), run=2001-2001msec 00:11:07.928 WRITE: bw=88.2MiB/s (92.4MB/s), 88.2MiB/s-88.2MiB/s (92.4MB/s-92.4MB/s), io=176MiB (185MB), run=2001-2001msec 00:11:07.928 ----------------------------------------------------- 00:11:07.928 Suppressions used: 00:11:07.928 count bytes template 00:11:07.928 1 32 /usr/src/fio/parse.c 00:11:07.928 1 8 libtcmalloc_minimal.so 00:11:07.928 ----------------------------------------------------- 00:11:07.928 00:11:07.928 17:57:24 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:07.928 17:57:24 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:07.928 17:57:24 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:11:07.928 17:57:24 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:07.928 17:57:24 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:11:07.928 17:57:24 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:08.187 17:57:24 -- nvme/nvme.sh@41 -- # bs=4096 00:11:08.187 17:57:24 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:11:08.187 17:57:24 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:11:08.187 17:57:24 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:08.187 17:57:24 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:08.187 17:57:24 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:08.187 17:57:24 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:08.187 17:57:24 -- common/autotest_common.sh@1330 -- # shift 00:11:08.187 17:57:24 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:08.187 17:57:24 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:08.187 17:57:24 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:08.187 17:57:24 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:08.187 17:57:24 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:08.187 17:57:24 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:08.187 17:57:24 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 
]] 00:11:08.187 17:57:24 -- common/autotest_common.sh@1336 -- # break 00:11:08.187 17:57:24 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:08.187 17:57:24 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:11:08.187 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:08.187 fio-3.35 00:11:08.187 Starting 1 thread 00:11:12.378 00:11:12.378 test: (groupid=0, jobs=1): err= 0: pid=76711: Tue Nov 26 17:57:28 2024 00:11:12.378 read: IOPS=21.9k, BW=85.4MiB/s (89.5MB/s)(171MiB/2001msec) 00:11:12.378 slat (usec): min=3, max=196, avg= 4.72, stdev= 1.78 00:11:12.378 clat (usec): min=198, max=12255, avg=2922.14, stdev=565.86 00:11:12.378 lat (usec): min=202, max=12326, avg=2926.86, stdev=566.72 00:11:12.378 clat percentiles (usec): 00:11:12.378 | 1.00th=[ 2474], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2704], 00:11:12.378 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:11:12.378 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3097], 95.00th=[ 3359], 00:11:12.378 | 99.00th=[ 5669], 99.50th=[ 7373], 99.90th=[ 8586], 99.95th=[ 9634], 00:11:12.378 | 99.99th=[12125] 00:11:12.378 bw ( KiB/s): min=84936, max=88184, per=98.77%, avg=86373.33, stdev=1655.87, samples=3 00:11:12.378 iops : min=21234, max=22046, avg=21593.33, stdev=413.97, samples=3 00:11:12.378 write: IOPS=21.7k, BW=84.8MiB/s (88.9MB/s)(170MiB/2001msec); 0 zone resets 00:11:12.378 slat (nsec): min=3819, max=57349, avg=5011.24, stdev=1504.96 00:11:12.378 clat (usec): min=173, max=12157, avg=2934.33, stdev=578.16 00:11:12.378 lat (usec): min=178, max=12170, avg=2939.34, stdev=578.99 00:11:12.378 clat percentiles (usec): 00:11:12.378 | 1.00th=[ 2474], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2737], 00:11:12.378 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2868], 60.00th=[ 2900], 00:11:12.378 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3097], 95.00th=[ 3392], 00:11:12.378 | 99.00th=[ 5669], 99.50th=[ 7373], 99.90th=[ 8586], 99.95th=[ 9765], 00:11:12.378 | 99.99th=[11994] 00:11:12.378 bw ( KiB/s): min=85792, max=87632, per=99.69%, avg=86554.67, stdev=959.51, samples=3 00:11:12.378 iops : min=21448, max=21908, avg=21638.67, stdev=239.88, samples=3 00:11:12.378 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:12.378 lat (msec) : 2=0.26%, 4=97.54%, 10=2.12%, 20=0.04% 00:11:12.378 cpu : usr=99.35%, sys=0.00%, ctx=7, majf=0, minf=626 00:11:12.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:12.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.378 issued rwts: total=43744,43432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.378 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.378 00:11:12.378 Run status group 0 (all jobs): 00:11:12.378 READ: bw=85.4MiB/s (89.5MB/s), 85.4MiB/s-85.4MiB/s (89.5MB/s-89.5MB/s), io=171MiB (179MB), run=2001-2001msec 00:11:12.378 WRITE: bw=84.8MiB/s (88.9MB/s), 84.8MiB/s-84.8MiB/s (88.9MB/s-88.9MB/s), io=170MiB (178MB), run=2001-2001msec 00:11:12.378 ----------------------------------------------------- 00:11:12.378 Suppressions used: 00:11:12.378 count bytes template 00:11:12.378 1 32 /usr/src/fio/parse.c 00:11:12.378 1 8 libtcmalloc_minimal.so 00:11:12.378 
----------------------------------------------------- 00:11:12.378 00:11:12.378 17:57:28 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:12.378 17:57:28 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:12.378 17:57:28 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:12.378 17:57:28 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:11:12.378 17:57:29 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:11:12.378 17:57:29 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:12.378 17:57:29 -- nvme/nvme.sh@41 -- # bs=4096 00:11:12.378 17:57:29 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:12.378 17:57:29 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:12.378 17:57:29 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:12.378 17:57:29 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:12.378 17:57:29 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:12.378 17:57:29 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:12.378 17:57:29 -- common/autotest_common.sh@1330 -- # shift 00:11:12.378 17:57:29 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:12.378 17:57:29 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:12.378 17:57:29 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:12.378 17:57:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:12.378 17:57:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:12.638 17:57:29 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:12.638 17:57:29 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:12.638 17:57:29 -- common/autotest_common.sh@1336 -- # break 00:11:12.638 17:57:29 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:12.638 17:57:29 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:12.638 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:12.638 fio-3.35 00:11:12.638 Starting 1 thread 00:11:16.828 00:11:16.828 test: (groupid=0, jobs=1): err= 0: pid=76777: Tue Nov 26 17:57:33 2024 00:11:16.828 read: IOPS=23.5k, BW=91.7MiB/s (96.2MB/s)(184MiB/2001msec) 00:11:16.828 slat (nsec): min=3682, max=78827, avg=4331.57, stdev=1066.23 00:11:16.828 clat (usec): min=167, max=11970, avg=2721.60, stdev=322.32 00:11:16.828 lat (usec): min=172, max=12049, avg=2725.93, stdev=322.71 00:11:16.828 clat percentiles (usec): 00:11:16.828 | 1.00th=[ 2073], 5.00th=[ 2507], 10.00th=[ 2540], 20.00th=[ 2606], 00:11:16.828 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2737], 00:11:16.828 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2868], 95.00th=[ 2933], 00:11:16.828 | 99.00th=[ 3490], 99.50th=[ 4293], 99.90th=[ 6259], 99.95th=[ 9241], 00:11:16.828 | 99.99th=[11731] 
00:11:16.828 bw ( KiB/s): min=90648, max=96088, per=99.89%, avg=93834.67, stdev=2837.56, samples=3 00:11:16.828 iops : min=22662, max=24022, avg=23458.67, stdev=709.39, samples=3 00:11:16.828 write: IOPS=23.3k, BW=91.1MiB/s (95.5MB/s)(182MiB/2001msec); 0 zone resets 00:11:16.828 slat (nsec): min=3816, max=43643, avg=4564.55, stdev=1071.20 00:11:16.828 clat (usec): min=265, max=11805, avg=2729.40, stdev=332.88 00:11:16.828 lat (usec): min=269, max=11818, avg=2733.97, stdev=333.26 00:11:16.828 clat percentiles (usec): 00:11:16.828 | 1.00th=[ 2114], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2606], 00:11:16.828 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2737], 00:11:16.828 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2868], 95.00th=[ 2933], 00:11:16.828 | 99.00th=[ 3687], 99.50th=[ 4424], 99.90th=[ 7308], 99.95th=[ 9503], 00:11:16.828 | 99.99th=[11469] 00:11:16.828 bw ( KiB/s): min=90152, max=96296, per=100.00%, avg=93896.00, stdev=3285.11, samples=3 00:11:16.828 iops : min=22538, max=24074, avg=23474.00, stdev=821.28, samples=3 00:11:16.828 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:11:16.828 lat (msec) : 2=0.73%, 4=98.41%, 10=0.77%, 20=0.04% 00:11:16.828 cpu : usr=99.50%, sys=0.00%, ctx=5, majf=0, minf=628 00:11:16.828 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:16.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:16.828 issued rwts: total=46994,46668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.828 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:16.828 00:11:16.828 Run status group 0 (all jobs): 00:11:16.828 READ: bw=91.7MiB/s (96.2MB/s), 91.7MiB/s-91.7MiB/s (96.2MB/s-96.2MB/s), io=184MiB (192MB), run=2001-2001msec 00:11:16.828 WRITE: bw=91.1MiB/s (95.5MB/s), 91.1MiB/s-91.1MiB/s (95.5MB/s-95.5MB/s), io=182MiB (191MB), run=2001-2001msec 00:11:16.828 ----------------------------------------------------- 00:11:16.828 Suppressions used: 00:11:16.828 count bytes template 00:11:16.828 1 32 /usr/src/fio/parse.c 00:11:16.828 1 8 libtcmalloc_minimal.so 00:11:16.828 ----------------------------------------------------- 00:11:16.828 00:11:16.828 17:57:33 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:16.828 17:57:33 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:16.828 17:57:33 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:16.828 17:57:33 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:17.086 17:57:33 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:17.086 17:57:33 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:17.345 17:57:34 -- nvme/nvme.sh@41 -- # bs=4096 00:11:17.345 17:57:34 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:17.345 17:57:34 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:17.345 17:57:34 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:17.345 17:57:34 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:17.345 17:57:34 -- common/autotest_common.sh@1328 -- # local 
sanitizers 00:11:17.345 17:57:34 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:17.345 17:57:34 -- common/autotest_common.sh@1330 -- # shift 00:11:17.345 17:57:34 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:17.345 17:57:34 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:17.345 17:57:34 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:17.345 17:57:34 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:17.345 17:57:34 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:17.345 17:57:34 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:17.345 17:57:34 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:17.345 17:57:34 -- common/autotest_common.sh@1336 -- # break 00:11:17.345 17:57:34 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:17.345 17:57:34 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:17.345 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:17.345 fio-3.35 00:11:17.345 Starting 1 thread 00:11:21.578 00:11:21.578 test: (groupid=0, jobs=1): err= 0: pid=76843: Tue Nov 26 17:57:37 2024 00:11:21.578 read: IOPS=23.9k, BW=93.2MiB/s (97.8MB/s)(187MiB/2001msec) 00:11:21.578 slat (nsec): min=3692, max=70420, avg=4409.34, stdev=1075.41 00:11:21.578 clat (usec): min=191, max=11685, avg=2677.40, stdev=313.59 00:11:21.578 lat (usec): min=195, max=11756, avg=2681.81, stdev=314.04 00:11:21.578 clat percentiles (usec): 00:11:21.578 | 1.00th=[ 2278], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2540], 00:11:21.578 | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2671], 00:11:21.578 | 70.00th=[ 2704], 80.00th=[ 2769], 90.00th=[ 2835], 95.00th=[ 2933], 00:11:21.578 | 99.00th=[ 3458], 99.50th=[ 4424], 99.90th=[ 6390], 99.95th=[ 8848], 00:11:21.578 | 99.99th=[11469] 00:11:21.578 bw ( KiB/s): min=93061, max=96560, per=99.64%, avg=95143.00, stdev=1841.85, samples=3 00:11:21.578 iops : min=23265, max=24140, avg=23785.67, stdev=460.60, samples=3 00:11:21.578 write: IOPS=23.7k, BW=92.7MiB/s (97.2MB/s)(185MiB/2001msec); 0 zone resets 00:11:21.578 slat (nsec): min=3825, max=68965, avg=4613.15, stdev=1036.56 00:11:21.578 clat (usec): min=206, max=11475, avg=2683.21, stdev=325.19 00:11:21.578 lat (usec): min=211, max=11488, avg=2687.82, stdev=325.62 00:11:21.578 clat percentiles (usec): 00:11:21.578 | 1.00th=[ 2278], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2540], 00:11:21.578 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2638], 60.00th=[ 2671], 00:11:21.578 | 70.00th=[ 2704], 80.00th=[ 2769], 90.00th=[ 2835], 95.00th=[ 2933], 00:11:21.578 | 99.00th=[ 3490], 99.50th=[ 4424], 99.90th=[ 6915], 99.95th=[ 9241], 00:11:21.578 | 99.99th=[11207] 00:11:21.578 bw ( KiB/s): min=92822, max=96704, per=100.00%, avg=95138.00, stdev=2046.79, samples=3 00:11:21.578 iops : min=23205, max=24176, avg=23784.33, stdev=511.98, samples=3 00:11:21.578 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:21.578 lat (msec) : 2=0.19%, 4=99.12%, 10=0.61%, 20=0.03% 00:11:21.578 cpu : usr=99.55%, sys=0.00%, ctx=4, majf=0, minf=625 00:11:21.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:21.578 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.578 issued rwts: total=47766,47471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.578 00:11:21.578 Run status group 0 (all jobs): 00:11:21.578 READ: bw=93.2MiB/s (97.8MB/s), 93.2MiB/s-93.2MiB/s (97.8MB/s-97.8MB/s), io=187MiB (196MB), run=2001-2001msec 00:11:21.578 WRITE: bw=92.7MiB/s (97.2MB/s), 92.7MiB/s-92.7MiB/s (97.2MB/s-97.2MB/s), io=185MiB (194MB), run=2001-2001msec 00:11:21.578 ----------------------------------------------------- 00:11:21.578 Suppressions used: 00:11:21.578 count bytes template 00:11:21.579 1 32 /usr/src/fio/parse.c 00:11:21.579 1 8 libtcmalloc_minimal.so 00:11:21.579 ----------------------------------------------------- 00:11:21.579 00:11:21.579 ************************************ 00:11:21.579 END TEST nvme_fio 00:11:21.579 ************************************ 00:11:21.579 17:57:38 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:21.579 17:57:38 -- nvme/nvme.sh@46 -- # true 00:11:21.579 00:11:21.579 real 0m18.397s 00:11:21.579 user 0m14.523s 00:11:21.579 sys 0m3.480s 00:11:21.579 17:57:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:21.579 17:57:38 -- common/autotest_common.sh@10 -- # set +x 00:11:21.579 00:11:21.579 real 1m33.696s 00:11:21.579 user 3m30.348s 00:11:21.579 sys 0m22.188s 00:11:21.579 ************************************ 00:11:21.579 END TEST nvme 00:11:21.579 ************************************ 00:11:21.579 17:57:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:21.579 17:57:38 -- common/autotest_common.sh@10 -- # set +x 00:11:21.579 17:57:38 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]] 00:11:21.579 17:57:38 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:21.579 17:57:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:21.579 17:57:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:21.579 17:57:38 -- common/autotest_common.sh@10 -- # set +x 00:11:21.579 ************************************ 00:11:21.579 START TEST nvme_scc 00:11:21.579 ************************************ 00:11:21.579 17:57:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:21.579 * Looking for test storage... 
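All three fio passes above (traddr 0000.00.07.0, 0000.00.08.0 and 0000.00.09.0) funnel through the fio_plugin() preload step traced at common/autotest_common.sh@1326-1341: ldd the SPDK fio plugin, pull out the sanitizer runtime it links against, and put that runtime ahead of the plugin in LD_PRELOAD before launching fio. A minimal sketch of that pattern, assuming the paths from this run (the traced code also loops over libclang_rt.asan as a fallback name):

    # Sketch only: if the SPDK fio plugin was built with ASan, the
    # sanitizer runtime must come first in LD_PRELOAD or fio aborts
    # at startup.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096

The dots in traddr=0000.00.07.0 are deliberate: fio treats ':' inside --filename as a separator, so the PCIe address is written with '.' in place of ':'. The spdk_nvme_identify calls, which parse their own -r argument, keep the colons (traddr:0000:00:08.0).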
00:11:21.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:21.579 17:57:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:21.579 17:57:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:21.579 17:57:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:21.579 17:57:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:21.579 17:57:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:21.579 17:57:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:21.579 17:57:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:21.579 17:57:38 -- scripts/common.sh@335 -- # IFS=.-: 00:11:21.579 17:57:38 -- scripts/common.sh@335 -- # read -ra ver1 00:11:21.579 17:57:38 -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.579 17:57:38 -- scripts/common.sh@336 -- # read -ra ver2 00:11:21.579 17:57:38 -- scripts/common.sh@337 -- # local 'op=<' 00:11:21.579 17:57:38 -- scripts/common.sh@339 -- # ver1_l=2 00:11:21.579 17:57:38 -- scripts/common.sh@340 -- # ver2_l=1 00:11:21.579 17:57:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:21.579 17:57:38 -- scripts/common.sh@343 -- # case "$op" in 00:11:21.579 17:57:38 -- scripts/common.sh@344 -- # : 1 00:11:21.579 17:57:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:21.579 17:57:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:21.579 17:57:38 -- scripts/common.sh@364 -- # decimal 1 00:11:21.579 17:57:38 -- scripts/common.sh@352 -- # local d=1 00:11:21.579 17:57:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.579 17:57:38 -- scripts/common.sh@354 -- # echo 1 00:11:21.579 17:57:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:21.579 17:57:38 -- scripts/common.sh@365 -- # decimal 2 00:11:21.579 17:57:38 -- scripts/common.sh@352 -- # local d=2 00:11:21.579 17:57:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.579 17:57:38 -- scripts/common.sh@354 -- # echo 2 00:11:21.579 17:57:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:21.579 17:57:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:21.579 17:57:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:21.579 17:57:38 -- scripts/common.sh@367 -- # return 0 00:11:21.579 17:57:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.579 17:57:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:21.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.579 --rc genhtml_branch_coverage=1 00:11:21.579 --rc genhtml_function_coverage=1 00:11:21.579 --rc genhtml_legend=1 00:11:21.579 --rc geninfo_all_blocks=1 00:11:21.579 --rc geninfo_unexecuted_blocks=1 00:11:21.579 00:11:21.579 ' 00:11:21.579 17:57:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:21.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.579 --rc genhtml_branch_coverage=1 00:11:21.579 --rc genhtml_function_coverage=1 00:11:21.579 --rc genhtml_legend=1 00:11:21.579 --rc geninfo_all_blocks=1 00:11:21.579 --rc geninfo_unexecuted_blocks=1 00:11:21.579 00:11:21.579 ' 00:11:21.579 17:57:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:21.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.579 --rc genhtml_branch_coverage=1 00:11:21.579 --rc genhtml_function_coverage=1 00:11:21.579 --rc genhtml_legend=1 00:11:21.579 --rc geninfo_all_blocks=1 00:11:21.579 --rc geninfo_unexecuted_blocks=1 00:11:21.579 00:11:21.579 ' 00:11:21.579 17:57:38 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:21.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.579 --rc genhtml_branch_coverage=1 00:11:21.579 --rc genhtml_function_coverage=1 00:11:21.579 --rc genhtml_legend=1 00:11:21.579 --rc geninfo_all_blocks=1 00:11:21.579 --rc geninfo_unexecuted_blocks=1 00:11:21.579 00:11:21.579 ' 00:11:21.579 17:57:38 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:21.579 17:57:38 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:21.579 17:57:38 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:21.579 17:57:38 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:21.579 17:57:38 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:21.579 17:57:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.579 17:57:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.579 17:57:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.579 17:57:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.579 17:57:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.579 17:57:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.579 17:57:38 -- paths/export.sh@5 -- # export PATH 00:11:21.579 17:57:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.579 17:57:38 -- nvme/functions.sh@10 -- # ctrls=() 00:11:21.579 17:57:38 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:21.579 17:57:38 -- nvme/functions.sh@11 -- # nvmes=() 00:11:21.579 17:57:38 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:21.579 17:57:38 -- nvme/functions.sh@12 -- # bdfs=() 00:11:21.579 17:57:38 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:21.579 17:57:38 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:21.579 17:57:38 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:21.579 
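The lcov probe above steps through scripts/common.sh's cmp_versions helper: 'lt 1.15 2' splits both version strings on '.', '-' and ':', then compares the pieces as decimals, left to right. A condensed sketch of that compare, assuming purely numeric components as in the traced call (the real helper also validates each piece through decimal()):

    # Condensed sketch of the dotted-version less-than shown in the trace.
    lt() {
        local IFS=.-: a b i
        read -ra a <<< "$1"; read -ra b <<< "$2"
        # Compare component by component; missing fields count as 0.
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov is pre-2.0, keep the old --rc option set"

For 1.15 vs 2 the loop stops at the first component (1 < 2) and returns 0, which is why the trace lands in the pre-2.0 branch and exports the --rc lcov_branch_coverage / lcov_function_coverage option set.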
17:57:38 -- nvme/functions.sh@14 -- # nvme_name= 00:11:21.579 17:57:38 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:21.579 17:57:38 -- nvme/nvme_scc.sh@12 -- # uname 00:11:21.579 17:57:38 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:21.579 17:57:38 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:21.579 17:57:38 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:22.518 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:22.518 Waiting for block devices as requested 00:11:22.518 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:22.778 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:22.778 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:22.778 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:28.127 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:28.127 17:57:44 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:28.127 17:57:44 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:28.127 17:57:44 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:28.127 17:57:44 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:28.127 17:57:44 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:28.127 17:57:44 -- scripts/common.sh@15 -- # local i 00:11:28.127 17:57:44 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:28.127 17:57:44 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:28.127 17:57:44 -- scripts/common.sh@24 -- # return 0 00:11:28.127 17:57:44 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:28.127 17:57:44 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:28.127 17:57:44 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@18 -- # shift 00:11:28.127 17:57:44 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 
00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 
17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:28.127 17:57:44 -- 
nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.127 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:28.127 17:57:44 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:28.127 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:28.128 17:57:44 
-- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.128 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.128 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:28.128 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:28.129 
17:57:44 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 
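The long walk above is nvme/functions.sh's nvme_get() folding the output of 'nvme id-ctrl /dev/nvme0' into a bash associative array: each 'field : value' line becomes nvme0[field]=value, and just below the controller is registered in the ctrls/nvmes/bdfs maps (bdfs[nvme0]=0000:00:09.0). A condensed sketch of that harvesting loop, using the nvme-cli path and device from this run:

    # Condensed sketch of the id-ctrl harvesting loop traced above.
    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # nvme-cli pads the field names
        [[ -n $reg && -n $val ]] || continue
        nvme0[$reg]=${val# }          # e.g. nvme0[mdts]=7, nvme0[sqes]=0x66
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "nvme0 reports MDTS=${nvme0[mdts]}, controller type ${nvme0[cntrltype]}"

Because val is the last variable passed to read, values that themselves contain colons (such as the ps0 power-state line) survive intact. Caching the identify data per controller this way lets later checks consult fields such as mdts or oncs without re-invoking nvme-cli each time.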
00:11:28.129 17:57:44 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:28.129 17:57:44 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:28.129 17:57:44 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:28.129 17:57:44 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:11:28.129 17:57:44 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:28.129 17:57:44 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:28.129 17:57:44 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:11:28.129 17:57:44 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:11:28.129 17:57:44 -- scripts/common.sh@15 -- # local i 00:11:28.129 17:57:44 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:11:28.129 17:57:44 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:28.129 17:57:44 -- scripts/common.sh@24 -- # return 0 00:11:28.129 17:57:44 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:28.129 17:57:44 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:28.129 17:57:44 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@18 -- # shift 00:11:28.129 17:57:44 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.129 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:28.129 17:57:44 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:28.129 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:28.130 17:57:44 -- 
nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:28.130 17:57:44 -- 
nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.130 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.130 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:28.130 17:57:44 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 
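The pattern repeating through this stretch of the trace is nvme/functions.sh's nvme_get helper at work: it runs nvme-cli, splits every "field : value" line of the identify output on ':' (the IFS=: at @21), skips empty values (@22), and evals each pair into a global associative array named after the device (@23), here nvme1. A minimal sketch of that loop, reconstructed from the traced line numbers @16-@23 and not the verbatim SPDK source; the whitespace trimming is an assumption about cleanup the real helper performs equivalently:

    NVME=/usr/local/src/nvme-cli/nvme        # the binary visible at @16
    nvme_get() {
        local ref=$1 reg val                 # @17
        shift                                # @18
        local -gA "$ref=()"                  # @20: nvme1, nvme1n1, ...
        while IFS=: read -r reg val; do      # @21
            reg=${reg//[[:space:]]/}                  # keys in the trace carry no spaces
            val="${val#"${val%%[![:space:]]*}"}"      # ltrim (assumption)
            val="${val%"${val##*[![:space:]]}"}"      # rtrim (assumption)
            [[ -n $val ]] || continue        # @22
            eval "${ref}[${reg}]=\"${val}\"" # @23
        done < <("$NVME" "$@")               # e.g. id-ctrl /dev/nvme1
    }

After nvme_get nvme1 id-ctrl /dev/nvme1, lookups such as ${nvme1[mdts]} (7) and ${nvme1[subnqn]} (nqn.2019-08.org.qemu:12342) reproduce the values recorded in the trace above.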
00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.131 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.131 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:28.131 17:57:44 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # 
nvme1[awupf]=0 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:28.132 17:57:44 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:28.132 17:57:44 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:28.132 17:57:44 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:28.132 17:57:44 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@18 -- # shift 00:11:28.132 17:57:44 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 
-- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.132 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:28.132 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.132 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # 
nvme1n1[nmic]=0 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.133 17:57:44 -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.133 17:57:44 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:28.133 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.133 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 
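Worth noting from the id-ns data just captured for nvme1n1: nlbaf=7 means eight LBA formats (lbaf0-lbaf7) are reported, and flbas=0x4 selects lbaf4, the entry flagged "(in use)" above: ms:0 lbads:12 rp:0, i.e. no metadata and 2^12-byte logical blocks. A quick check of that arithmetic, using only values from the trace:

    flbas=0x4                 # from the trace; low nibble = in-use format index
    lbaf=$(( flbas & 0xf ))   # -> 4, matching the "(in use)" marker on lbaf4
    lbads=12                  # lbaf4: "ms:0 lbads:12 rp:0 (in use)"
    echo $(( 1 << lbads ))    # -> 4096-byte logical blocks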
00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:28.134 17:57:44 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:28.134 17:57:44 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:11:28.134 17:57:44 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:11:28.134 17:57:44 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@18 -- # shift 00:11:28.134 17:57:44 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:28.134 17:57:44 -- 
nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.134 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:11:28.134 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:11:28.134 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:11:28.135 17:57:44 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:44 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:44 -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:44 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:44 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:44 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:44 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:44 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:44 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:44 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:44 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:28.135 17:57:44 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:44 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:44 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 
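The records above also show the outer enumeration driving these id-ns dumps: functions.sh@53 binds a nameref _ctrl_ns to nvme1_ns, @54-@57 walk the controller's namespace nodes in sysfs and run nvme_get on each, and @58 registers the result keyed by namespace number. A rough reconstruction of that loop body from the traced line numbers, not the verbatim source (the nameref only works inside the enclosing function):

    local -n _ctrl_ns=${ctrl##*/}_ns              # @53: e.g. nvme1_ns
    for ns in "$ctrl/${ctrl##*/}n"*; do           # @54: /sys/class/nvme/nvme1/nvme1n1 ...
        [[ -e $ns ]] || continue                  # @55
        ns_dev=${ns##*/}                          # @56: nvme1n1, nvme1n2, nvme1n3
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57
        _ctrl_ns[${ns##*n}]=$ns_dev               # @58: keyed by namespace number
    done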
00:11:28.135 17:57:44 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:28.135 17:57:44 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:11:28.135 17:57:44 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:11:28.135 17:57:44 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:11:28.135 17:57:44 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:11:28.135 17:57:45 -- nvme/functions.sh@18 -- # shift 00:11:28.135 17:57:45 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:11:28.135 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:11:28.135 17:57:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:28.135 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:28.135 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:11:28.135 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:11:28.135 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:28.135 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:11:28.135 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:11:28.135 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:28.135 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:11:28.135 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:11:28.135 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:28.135 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:11:28.135 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:11:28.135 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.135 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.135 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:28.135 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:11:28.135 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:11:28.135 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 
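nvme1n3 opens with the same geometry as its siblings: nsze, ncap and nuse all read 0x100000, and flbas=0x4 again points at the 4096-byte lbaf4 format, so each of the three namespaces on this controller works out to 2^20 blocks of 2^12 bytes:

    echo $(( 0x100000 * 4096 ))   # 4294967296 bytes = 4 GiB per namespace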
00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 
'nvme1n3[nabspf]="0"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.136 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.136 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.136 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:11:28.137 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.137 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:28.137 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:11:28.137 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.137 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:28.137 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:11:28.137 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.137 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:28.137 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:28.137 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.137 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:28.137 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:28.137 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.137 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:28.137 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:28.137 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.137 17:57:45 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:28.137 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:28.137 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:28.137 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.137 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:28.137 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:28.137 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.137 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:28.137 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:28.137 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.137 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:28.137 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:28.137 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.137 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:28.137 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:28.137 17:57:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.137 17:57:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:28.137 17:57:45 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:28.137 17:57:45 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:28.137 17:57:45 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:28.137 17:57:45 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:28.137 17:57:45 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:28.137 17:57:45 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:28.137 17:57:45 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:28.137 17:57:45 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:28.137 17:57:45 -- scripts/common.sh@15 -- # local i 00:11:28.137 17:57:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:28.137 17:57:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:28.137 17:57:45 -- scripts/common.sh@24 -- # return 0 00:11:28.137 17:57:45 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:28.137 17:57:45 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:28.137 17:57:45 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:28.137 17:57:45 -- nvme/functions.sh@18 -- # shift 00:11:28.137 17:57:45 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.137 17:57:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl 
/dev/nvme2 00:11:28.137 17:57:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.137 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.433 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:28.433 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:28.433 17:57:45 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:28.433 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.433 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.433 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:28.433 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:28.433 17:57:45 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:28.433 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.433 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.433 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:28.433 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:28.433 17:57:45 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:28.433 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.433 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.433 17:57:45 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:28.433 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:28.433 17:57:45 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:28.433 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.433 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.433 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:28.433 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:28.433 17:57:45 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:28.433 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.433 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.433 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:28.433 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:28.433 17:57:45 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:28.433 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.433 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.433 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:28.433 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:28.433 17:57:45 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:28.433 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.433 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.433 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.433 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:28.433 17:57:45 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:28.433 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.433 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.433 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:28.433 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:28.433 17:57:45 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:28.433 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.433 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.433 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.433 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:28.433 17:57:45 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:28.433 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.433 17:57:45 -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:28.433 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:28.433 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:28.433 17:57:45 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 
'nvme2[crdt3]="0"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.434 
17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.434 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.434 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:28.434 17:57:45 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 
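Two of the id-ctrl fields just captured, wctemp=343 and cctemp=373, are the warning and critical composite-temperature thresholds, which NVMe encodes in Kelvin; converting shows the QEMU defaults are 70 C and 100 C. A small illustrative helper (kelvin_to_c is mine, not part of the traced script):

kelvin_to_c() { echo $(( $1 - 273 )); }               # integer Kelvin -> Celsius
echo "wctemp: $(kelvin_to_c "${nvme2[wctemp]}") C"    # 343 -> 70
echo "cctemp: $(kelvin_to_c "${nvme2[cctemp]}") C"    # 373 -> 100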
00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 
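Bitmask registers such as oacs=0x12a (parsed a few records back) are easier to sanity-check when decoded bit by bit. Per the NVMe base specification, OACS bit 1 is Format NVM, bit 3 Namespace Management, bit 5 Directives, and bit 8 Doorbell Buffer Config, which matches the set this QEMU controller advertises; the lookup table below is transcribed from memory of the spec, so verify it against the revision you target:

declare -A oacs_bits=(
    [0]="Security Send/Receive"    [1]="Format NVM"
    [2]="Firmware Commit/Download" [3]="Namespace Management"
    [4]="Device Self-test"         [5]="Directives"
    [6]="NVMe-MI Send/Receive"     [7]="Virtualization Management"
    [8]="Doorbell Buffer Config"
)
oacs=$(( ${nvme2[oacs]} ))         # "0x12a" parses as hex in bash arithmetic
for bit in {0..8}; do              # 0x12a sets bits 1, 3, 5 and 8
    (( oacs & (1 << bit) )) && echo "oacs bit $bit: ${oacs_bits[$bit]}"
done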
00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.435 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:28.435 17:57:45 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:28.435 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:28.436 17:57:45 -- 
nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 
-- # eval 'nvme2[sgls]="0x1"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 
-- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:28.436 17:57:45 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.436 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.436 17:57:45 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:28.436 17:57:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:28.436 17:57:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:28.436 17:57:45 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:28.437 17:57:45 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:28.437 17:57:45 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@18 -- # shift 00:11:28.437 17:57:45 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # 
nvme2n1[nsfeat]=0x14 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.437 17:57:45 -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.437 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.437 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.437 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 
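The geometry fields above are enough to work out this namespace's usable size: nsze=0x17a17a blocks, and flbas=0x7 selects LBA format 7, whose descriptor (lbaf7, a few records further down) reads lbads:12, i.e. 4096-byte blocks. The arithmetic, with values copied from this trace:

nsze=$(( 0x17a17a ))             # 1548666 blocks (nvme2n1[nsze])
lbads=12                         # from lbaf7 "ms:64 lbads:12 rp:0 (in use)"
bytes=$(( nsze << lbads ))       # 1548666 * 4096 = 6343335936 bytes, ~5.9 GiB
echo "$nsze blocks x $(( 1 << lbads )) B = $bytes bytes"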
00:11:28.438 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.438 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.438 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.438 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.438 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.438 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.438 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.438 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.438 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.438 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.438 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:28.438 17:57:45 -- nvme/functions.sh@23 
-- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.438 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.438 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.438 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.438 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.438 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.438 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.438 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.438 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.438 17:57:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:28.438 17:57:45 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:28.438 17:57:45 -- 
nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:28.438 17:57:45 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:11:28.438 17:57:45 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:28.438 17:57:45 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:28.438 17:57:45 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:28.438 17:57:45 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:28.438 17:57:45 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:28.438 17:57:45 -- scripts/common.sh@15 -- # local i 00:11:28.438 17:57:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:28.438 17:57:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:28.438 17:57:45 -- scripts/common.sh@24 -- # return 0 00:11:28.438 17:57:45 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:28.438 17:57:45 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:28.438 17:57:45 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:28.438 17:57:45 -- nvme/functions.sh@18 -- # shift 00:11:28.438 17:57:45 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.438 17:57:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:28.438 17:57:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.438 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.438 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:28.438 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.439 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.439 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.439 17:57:45 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.439 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.439 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.439 17:57:45 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:28.439 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.439 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.439 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.439 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.439 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.439 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.439 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.439 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.439 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.439 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.439 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- 
# nvme3[cntrltype]=1 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.439 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.439 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.439 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.439 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.439 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:28.439 17:57:45 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.439 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.439 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:28.440 17:57:45 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:28.440 17:57:45 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.440 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.440 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 
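
The loop traced above is easier to follow in condensed form: `nvme_get` pipes an `nvme id-ctrl` (or `id-ns`) dump through a `:`-split `read` and lands every non-empty register in a global associative array, which is why each field shows up twice in the trace (once for the `eval`, once for the resulting assignment such as `nvme3[megcap]=0`). A minimal sketch of that pattern, with a helper name and whitespace handling that are illustrative assumptions rather than the literal SPDK source:

  # Sketch (assumed shape, not the literal helper): parse one
  # "nvme id-ctrl"/"id-ns" dump into a global associative array.
  nvme_get_sketch() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"              # cf. "local -gA 'nvme3=()'" in the trace
    while IFS=': ' read -r reg val; do
      # keep only "name : value" lines; banner lines leave val empty
      [[ -n $val ]] && eval "${ref}[\$reg]=\$val"
    done < <("$@")                   # e.g. nvme id-ctrl /dev/nvme3
  }
  # usage: nvme_get_sketch nvme3 nvme id-ctrl /dev/nvme3; echo "${nvme3[oncs]}"
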
00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
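
Two of the values captured just above decode into the familiar queue-entry sizes: SQES and CQES each pack two log2 byte counts into one byte, the low nibble being the required entry size and the high nibble the maximum, so `sqes=0x66` and `cqes=0x44` mean the standard 64-byte submission and 16-byte completion entries. A quick sketch of the arithmetic (variable names are illustrative):

  # Decode the SQES/CQES nibbles recorded above: low nibble = required
  # entry size, high nibble = maximum, both as log2(bytes).
  sqes=0x66 cqes=0x44
  printf 'SQ entry: %d..%d bytes\n' $((2 ** (sqes & 0xf))) $((2 ** (sqes >> 4)))
  printf 'CQ entry: %d..%d bytes\n' $((2 ** (cqes & 0xf))) $((2 ** (cqes >> 4)))
  # -> SQ entry: 64..64 bytes
  # -> CQ entry: 16..16 bytes
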
00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.441 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:28.441 17:57:45 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:28.441 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # 
IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:28.442 17:57:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:28.442 17:57:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:11:28.442 17:57:45 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:11:28.442 17:57:45 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@18 -- # shift 00:11:28.442 17:57:45 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:28.442 
17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.442 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:28.442 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.442 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # 
nvme3n1[nsattr]=0 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.443 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:28.443 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.443 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.444 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:28.444 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:28.444 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:28.444 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.444 17:57:45 -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:28.444 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:28.444 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:28.444 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:28.444 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.444 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.444 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:28.444 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:28.444 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:28.444 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.444 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.444 17:57:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:28.444 17:57:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:28.444 17:57:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:28.444 17:57:45 -- nvme/functions.sh@21 -- # IFS=: 00:11:28.444 17:57:45 -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.444 17:57:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:28.444 17:57:45 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:28.444 17:57:45 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:28.444 17:57:45 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:28.444 17:57:45 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:28.444 17:57:45 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:28.444 17:57:45 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:28.444 17:57:45 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:11:28.444 17:57:45 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:28.444 17:57:45 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:11:28.444 17:57:45 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:28.444 17:57:45 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:11:28.444 17:57:45 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:11:28.444 17:57:45 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:28.444 17:57:45 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:28.444 17:57:45 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:11:28.444 17:57:45 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:11:28.444 17:57:45 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:11:28.444 17:57:45 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:11:28.444 17:57:45 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:28.444 17:57:45 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:28.444 17:57:45 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:28.444 17:57:45 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:28.444 17:57:45 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:28.444 17:57:45 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:28.444 17:57:45 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:28.444 17:57:45 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:28.444 17:57:45 -- nvme/functions.sh@197 -- # echo nvme1 00:11:28.444 17:57:45 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:28.444 17:57:45 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:11:28.444 17:57:45 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:11:28.444 17:57:45 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:11:28.444 
17:57:45 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:11:28.444 17:57:45 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:28.444 17:57:45 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:28.444 17:57:45 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:28.444 17:57:45 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:28.444 17:57:45 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:28.444 17:57:45 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:28.444 17:57:45 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:28.444 17:57:45 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:28.444 17:57:45 -- nvme/functions.sh@197 -- # echo nvme0 00:11:28.444 17:57:45 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:28.444 17:57:45 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:11:28.444 17:57:45 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:11:28.444 17:57:45 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:11:28.444 17:57:45 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:11:28.444 17:57:45 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:28.444 17:57:45 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:28.444 17:57:45 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:28.444 17:57:45 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:28.444 17:57:45 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:28.444 17:57:45 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:28.444 17:57:45 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:28.444 17:57:45 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:28.444 17:57:45 -- nvme/functions.sh@197 -- # echo nvme3 00:11:28.444 17:57:45 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:28.444 17:57:45 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:11:28.444 17:57:45 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:11:28.444 17:57:45 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:11:28.444 17:57:45 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:11:28.444 17:57:45 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:28.444 17:57:45 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:28.444 17:57:45 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:28.444 17:57:45 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:28.444 17:57:45 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:28.444 17:57:45 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:28.444 17:57:45 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:28.444 17:57:45 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:28.444 17:57:45 -- nvme/functions.sh@197 -- # echo nvme2 00:11:28.444 17:57:45 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:11:28.444 17:57:45 -- nvme/functions.sh@206 -- # echo nvme1 00:11:28.444 17:57:45 -- nvme/functions.sh@207 -- # return 0 00:11:28.444 17:57:45 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:28.444 17:57:45 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0 00:11:28.444 17:57:45 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:29.817 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:29.817 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:29.817 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:30.074 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:11:30.074 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:30.074 17:57:46 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:30.074 17:57:46 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:30.074 17:57:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:30.074 17:57:46 -- common/autotest_common.sh@10 -- # set +x 00:11:30.074 ************************************ 00:11:30.074 START TEST nvme_simple_copy 00:11:30.074 ************************************ 00:11:30.074 17:57:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:30.333 Initializing NVMe Controllers 00:11:30.333 Attaching to 0000:00:08.0 00:11:30.333 Controller supports SCC. Attached to 0000:00:08.0 00:11:30.333 Namespace ID: 1 size: 4GB 00:11:30.333 Initialization complete. 00:11:30.333 00:11:30.333 Controller QEMU NVMe Ctrl (12342 ) 00:11:30.333 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:30.333 Namespace Block Size:4096 00:11:30.333 Writing LBAs 0 to 63 with Random Data 00:11:30.333 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:30.333 LBAs matching Written Data: 64 00:11:30.333 00:11:30.333 real 0m0.254s 00:11:30.333 user 0m0.080s 00:11:30.333 sys 0m0.073s 00:11:30.333 17:57:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:30.333 17:57:47 -- common/autotest_common.sh@10 -- # set +x 00:11:30.333 ************************************ 00:11:30.333 END TEST nvme_simple_copy 00:11:30.333 ************************************ 00:11:30.333 00:11:30.333 real 0m9.038s 00:11:30.333 user 0m1.572s 00:11:30.333 sys 0m2.566s 00:11:30.333 17:57:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:30.333 17:57:47 -- common/autotest_common.sh@10 -- # set +x 00:11:30.333 ************************************ 00:11:30.333 END TEST nvme_scc 00:11:30.333 ************************************ 00:11:30.592 17:57:47 -- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]] 00:11:30.592 17:57:47 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:11:30.592 17:57:47 -- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]] 00:11:30.592 17:57:47 -- spdk/autotest.sh@225 -- # [[ 1 -eq 1 ]] 00:11:30.592 17:57:47 -- spdk/autotest.sh@226 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:30.592 17:57:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:30.592 17:57:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:30.592 17:57:47 -- common/autotest_common.sh@10 -- # set +x 00:11:30.592 ************************************ 00:11:30.592 START TEST nvme_fdp 00:11:30.592 ************************************ 00:11:30.592 17:57:47 -- common/autotest_common.sh@1114 -- # test/nvme/nvme_fdp.sh 00:11:30.592 * Looking for test storage... 
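
The "Controller supports SCC" line in the simple-copy output above is the runtime confirmation of the `ctrl_has_scc` checks traced earlier at `nvme/functions.sh@182-197`: every controller reported `oncs=0x15d`, and bit 8 of ONCS advertises the Simple Copy command, which is why `get_ctrl_with_feature scc` settled on `nvme1`. A sketch of that bit test with the value from the trace:

  # ONCS bit 8 = Simple Copy (SCC) support, mirroring the
  # "(( oncs & 1 << 8 ))" test at nvme/functions.sh@186 above.
  oncs=0x15d                 # 0x15d = 0b1_0101_1101; bit 8 (0x100) is set
  if ((oncs & 1 << 8)); then
    echo "controller supports Simple Copy"
  fi
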
00:11:30.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:30.592 17:57:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:30.592 17:57:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:30.592 17:57:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:30.592 17:57:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:30.592 17:57:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:30.592 17:57:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:30.592 17:57:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:30.592 17:57:47 -- scripts/common.sh@335 -- # IFS=.-: 00:11:30.592 17:57:47 -- scripts/common.sh@335 -- # read -ra ver1 00:11:30.592 17:57:47 -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.592 17:57:47 -- scripts/common.sh@336 -- # read -ra ver2 00:11:30.592 17:57:47 -- scripts/common.sh@337 -- # local 'op=<' 00:11:30.592 17:57:47 -- scripts/common.sh@339 -- # ver1_l=2 00:11:30.592 17:57:47 -- scripts/common.sh@340 -- # ver2_l=1 00:11:30.592 17:57:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:30.592 17:57:47 -- scripts/common.sh@343 -- # case "$op" in 00:11:30.592 17:57:47 -- scripts/common.sh@344 -- # : 1 00:11:30.592 17:57:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:30.592 17:57:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:30.592 17:57:47 -- scripts/common.sh@364 -- # decimal 1 00:11:30.592 17:57:47 -- scripts/common.sh@352 -- # local d=1 00:11:30.592 17:57:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.592 17:57:47 -- scripts/common.sh@354 -- # echo 1 00:11:30.592 17:57:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:30.592 17:57:47 -- scripts/common.sh@365 -- # decimal 2 00:11:30.592 17:57:47 -- scripts/common.sh@352 -- # local d=2 00:11:30.592 17:57:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.592 17:57:47 -- scripts/common.sh@354 -- # echo 2 00:11:30.592 17:57:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:30.592 17:57:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:30.592 17:57:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:30.593 17:57:47 -- scripts/common.sh@367 -- # return 0 00:11:30.593 17:57:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.593 17:57:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:30.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.593 --rc genhtml_branch_coverage=1 00:11:30.593 --rc genhtml_function_coverage=1 00:11:30.593 --rc genhtml_legend=1 00:11:30.593 --rc geninfo_all_blocks=1 00:11:30.593 --rc geninfo_unexecuted_blocks=1 00:11:30.593 00:11:30.593 ' 00:11:30.593 17:57:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:30.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.593 --rc genhtml_branch_coverage=1 00:11:30.593 --rc genhtml_function_coverage=1 00:11:30.593 --rc genhtml_legend=1 00:11:30.593 --rc geninfo_all_blocks=1 00:11:30.593 --rc geninfo_unexecuted_blocks=1 00:11:30.593 00:11:30.593 ' 00:11:30.593 17:57:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:30.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.593 --rc genhtml_branch_coverage=1 00:11:30.593 --rc genhtml_function_coverage=1 00:11:30.593 --rc genhtml_legend=1 00:11:30.593 --rc geninfo_all_blocks=1 00:11:30.593 --rc geninfo_unexecuted_blocks=1 00:11:30.593 00:11:30.593 ' 00:11:30.593 17:57:47 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:30.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.593 --rc genhtml_branch_coverage=1 00:11:30.593 --rc genhtml_function_coverage=1 00:11:30.593 --rc genhtml_legend=1 00:11:30.593 --rc geninfo_all_blocks=1 00:11:30.593 --rc geninfo_unexecuted_blocks=1 00:11:30.593 00:11:30.593 ' 00:11:30.593 17:57:47 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:30.593 17:57:47 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:30.593 17:57:47 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:30.593 17:57:47 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:30.593 17:57:47 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:30.851 17:57:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.851 17:57:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.851 17:57:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.851 17:57:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.851 17:57:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.851 17:57:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.851 17:57:47 -- paths/export.sh@5 -- # export PATH 00:11:30.851 17:57:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.851 17:57:47 -- nvme/functions.sh@10 -- # ctrls=() 00:11:30.851 17:57:47 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:30.851 17:57:47 -- nvme/functions.sh@11 -- # nvmes=() 00:11:30.851 17:57:47 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:30.851 17:57:47 -- nvme/functions.sh@12 -- # bdfs=() 00:11:30.851 17:57:47 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:30.851 17:57:47 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:30.851 17:57:47 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:30.852 
17:57:47 -- nvme/functions.sh@14 -- # nvme_name= 00:11:30.852 17:57:47 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:30.852 17:57:47 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:31.419 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:31.678 Waiting for block devices as requested 00:11:31.678 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:31.678 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:31.936 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:31.936 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:37.219 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:37.219 17:57:53 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:37.219 17:57:53 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:37.219 17:57:53 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:37.219 17:57:53 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:37.219 17:57:53 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:37.219 17:57:53 -- scripts/common.sh@15 -- # local i 00:11:37.219 17:57:53 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:37.219 17:57:53 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:37.219 17:57:53 -- scripts/common.sh@24 -- # return 0 00:11:37.219 17:57:53 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:37.219 17:57:53 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:37.219 17:57:53 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@18 -- # shift 00:11:37.219 17:57:53 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:37.219 17:57:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:37.219 17:57:53 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 
'nvme0[ctratt]="0x88010"' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.219 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.219 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:37.219 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 
17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:37.220 
17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:37.220 17:57:53 -- 
nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 
17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.220 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.220 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.220 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 
00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # 
nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:37.221 17:57:53 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:37.221 17:57:53 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 
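(Editor's note: the long register dump above is nvme_get from test/common/nvme/functions.sh walking the output of `nvme id-ctrl /dev/nvme0` one line at a time: each "reg : val" pair is split on ":" and eval'd into a globally declared associative array named after the controller, exactly as the eval 'nvme0[vid]="0x1b36"' entries show. A condensed sketch of that loop, reconstructed from the trace — the whitespace trimming in particular is approximate:)

  # Parse nvme-cli "reg : val" output into a global associative array
  # whose name is passed in $1 (e.g. nvme0), as the trace above does.
  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                      # trace: local -gA 'nvme0=()'

      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}             # strip the padded register name
          val=${val# }                         # drop the leading space
          [[ -n $reg && -n $val ]] || continue # skip banner/blank lines
          eval "${ref}[\$reg]=\$val"           # trace form: nvme0[vid]="0x1b36"
      done < <(/usr/local/src/nvme-cli/nvme "$@")
  }

  nvme_get nvme0 id-ctrl /dev/nvme0
  echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]} subnqn=${nvme0[subnqn]}"

(The eval is what lets one function populate differently named arrays — nvme0, nvme1, later nvme1n1 — without bash 4.3 namerefs; the same walk now repeats below for the controller at 0000:00:08.0.)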
00:11:37.221 17:57:53 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:37.221 17:57:53 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:11:37.221 17:57:53 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:37.221 17:57:53 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:37.221 17:57:53 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:37.221 17:57:53 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:11:37.221 17:57:53 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:11:37.221 17:57:53 -- scripts/common.sh@15 -- # local i 00:11:37.221 17:57:53 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:11:37.221 17:57:53 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:37.221 17:57:53 -- scripts/common.sh@24 -- # return 0 00:11:37.221 17:57:53 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:37.221 17:57:53 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:37.221 17:57:53 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@18 -- # shift 00:11:37.221 17:57:53 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.221 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.221 17:57:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:37.221 17:57:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # 
IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:37.222 
17:57:53 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 
00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.222 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:37.222 17:57:53 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:37.222 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 
00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 
00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # 
eval 'nvme1[megcap]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.223 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:37.223 17:57:53 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:37.223 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:37.224 17:57:53 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:37.224 17:57:53 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:37.224 17:57:53 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:37.224 17:57:53 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:37.224 17:57:53 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@18 -- # shift 00:11:37.224 17:57:53 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ 
-n '' ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.224 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.224 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:11:37.224 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 
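The repeated IFS=: / read -r reg val / eval triplets above are the body of nvme_get (nvme/functions.sh@16-23): it runs /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1, splits each "field : value" line of the output, and stores the pair in a global associative array named after the device. A minimal sketch of that loop as reconstructed from this trace; the exact key/value trimming is an assumption, not the verbatim SPDK source:

  nvme_get() {
    local ref=$1 reg val                        # @17: ref = array name, e.g. nvme1n1
    shift                                       # @18: remaining args = nvme-cli subcommand
    local -gA "$ref=()"                         # @20: declare the global associative array
    while IFS=: read -r reg val; do             # @21: split "field : value" lines
      [[ -n $val ]] || continue                 # @22: skip banner/blank lines
      reg=${reg//[[:space:]]/}                  # assumed: strip whitespace from the key
      eval "${ref}[${reg}]=\"${val# }\""        # @23: e.g. nvme1n1[nsze]="0x100000"
    done < <(/usr/local/src/nvme-cli/nvme "$@") # @16: e.g. id-ns /dev/nvme1n1
  }

In this run it is invoked as nvme_get nvme1n1 id-ns /dev/nvme1n1, after which ${nvme1n1[nsze]} expands to 0x100000, ${nvme1n1[flbas]} to 0x4, and so on.
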
00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:37.225 17:57:53 -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.225 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.225 17:57:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:37.225 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:37.226 17:57:53 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:37.226 17:57:53 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:11:37.226 17:57:53 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:11:37.226 17:57:53 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@18 -- # shift 00:11:37.226 17:57:53 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 
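Between the namespace dumps the trace also shows the enumeration loop driving these calls (nvme/functions.sh@53-58): a nameref _ctrl_ns aliases the controller's namespace map, and each nvme1n* node under /sys/class/nvme/nvme1 gets its own nvme_get pass. A hedged sketch of that loop, inside the discovery function, with only the details visible in the xtrace taken as given:

  local -n _ctrl_ns=${ctrl_dev}_ns            # @53: nameref onto e.g. nvme1_ns
  for ns in "$ctrl/${ctrl##*/}n"*; do         # @54: /sys/class/nvme/nvme1/nvme1n1 ...
    [[ -e $ns ]] || continue                  # @55: assumed skip when the glob has no match
    ns_dev=${ns##*/}                          # @56: nvme1n1, nvme1n2, nvme1n3
    nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57: fills nvme1n1[...], nvme1n2[...], ...
    _ctrl_ns[${ns##*n}]=$ns_dev               # @58: keyed by namespace index (1, 2, 3)
  done

So nvme1_ns ends up mapping 1 -> nvme1n1, 2 -> nvme1n2, 3 -> nvme1n3, which is exactly the assignment visible at @58 above and again after each of the following dumps.
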
00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:11:37.226 17:57:53 -- 
nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.226 
17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.226 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.226 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:37.226 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:53 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:11:37.227 17:57:53 -- 
nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:37.227 17:57:53 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:53 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:11:37.227 17:57:53 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:37.227 17:57:53 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 
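Each namespace reports eight LBA formats (the lbaf0-lbaf7 entries above), and flbas=0x4 says format 4, "ms:0 lbads:12 rp:0 (in use)", is the active one: 2^12 = 4096-byte data blocks with no per-block metadata. A small worked example of that decoding, separate from the test itself:

  # Decode the in-use LBA format from the id-ns values dumped above.
  flbas=0x4                                 # formatted LBA size field
  fmt=$((flbas & 0xf))                      # low bits select the format -> 4 (lbaf4)
  lbads=12                                  # lbaf4 reads "ms:0 lbads:12 rp:0 (in use)"
  echo "block size: $((1 << lbads)) bytes"  # -> block size: 4096 bytes

The 512-byte variants (lbads:9) and the metadata-bearing ones (ms:8/16/64) are advertised but unused in this run.
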
00:11:37.227 17:57:53 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:11:37.227 17:57:53 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:11:37.227 17:57:53 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:11:37.227 17:57:53 -- nvme/functions.sh@18 -- # shift 00:11:37.227 17:57:53 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:53 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:11:37.227 17:57:54 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.227 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.227 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:11:37.227 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # 
nvme1n3[nulbaf]=0 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:37.228 17:57:54 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:37.228 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.228 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.228 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:37.229 17:57:54 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:37.229 17:57:54 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:37.229 17:57:54 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:37.229 17:57:54 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:37.229 17:57:54 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:37.229 17:57:54 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:37.229 17:57:54 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:37.229 17:57:54 -- scripts/common.sh@15 -- # local i 00:11:37.229 17:57:54 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:37.229 17:57:54 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:37.229 17:57:54 -- scripts/common.sh@24 -- # return 0 00:11:37.229 17:57:54 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:37.229 17:57:54 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:37.229 17:57:54 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@18 -- # shift 00:11:37.229 17:57:54 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 
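Once a controller's namespaces are all parsed, functions.sh@60-63 files it into the global maps, and the @47 loop advances to the next /sys/class/nvme node, gating each bdf through pci_can_use in scripts/common.sh. A hedged sketch of both, reconstructed from the trace; the allow/block list variable names are assumptions (the xtrace only shows their expansions, e.g. the empty left operand in "[[ =~ 0000:00:06.0 ]]"):

  ctrls["$ctrl_dev"]=$ctrl_dev                 # @60: nvme1 -> its id-ctrl array
  nvmes["$ctrl_dev"]=${ctrl_dev}_ns            # @61: nvme1 -> the nvme1_ns map
  bdfs["$ctrl_dev"]=$pci                       # @62: nvme1 -> 0000:00:08.0
  ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # @63: index 1 -> nvme1

  pci_can_use() {                              # scripts/common.sh, sketch only
    local i                                    # @15
    [[ $PCI_ALLOWED =~ $1 ]] && return 0       # @18: an empty allow-list prints
                                               #      as '[[ =~ 0000:00:06.0 ]]'
    [[ -z $PCI_BLOCKED ]] && return 0          # @22 + @24: nothing blocked -> usable
    [[ $PCI_BLOCKED =~ $1 ]] && return 1       # assumed: explicitly blocked BDFs lose
    return 0
  }

Here pci_can_use 0000:00:06.0 returns 0, so nvme2 (another QEMU controller, vid 0x1b36, sn 12340) is parsed next exactly as nvme1 was.
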
00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 
17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.229 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.229 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:37.229 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:37.230 17:57:54 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:37.230 17:57:54 -- 
nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:37.230 17:57:54 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.230 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.230 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 
00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 
00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:37.231 17:57:54 
-- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:37.231 17:57:54 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:37.231 17:57:54 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:37.231 17:57:54 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:37.231 17:57:54 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@18 -- # shift 00:11:37.231 17:57:54 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:37.231 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.231 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.231 
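[annotation] The trace above repeats one pattern hundreds of times: nvme/functions.sh@17-23 split each "reg : val" line of `nvme id-ctrl` (and, from here on, `nvme id-ns`) output on ':' and eval the pair into a global associative array such as nvme2 or nvme2n1. A minimal sketch of that pattern, reconstructed from the trace (exact whitespace handling in the real helper may differ):

    #!/usr/bin/env bash
    # Sketch of the parse loop visible in the trace (nvme/functions.sh@17-23).
    # $1 is the array name (e.g. nvme2); the remaining args are the nvme-cli
    # invocation whose output is parsed (e.g. nvme id-ctrl /dev/nvme2).
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # declares a global nvme2=() etc.
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue       # skip headers and blank lines
            reg=${reg//[[:space:]]/}        # strip padding around the key
            val=${val# }                    # drop the single leading space
            eval "${ref}[${reg}]=\"\$val\"" # nvme2[vid]="0x1b36", ...
        done < <("$@")
    }

    # Usage matching the trace:
    #   nvme_get_sketch nvme2 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
    #   echo "${nvme2[sn]}"   # -> '12340 ' (trailing padding preserved)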
17:57:54 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:37.232 17:57:54 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:37.232 
17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:37.232 17:57:54 
-- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.232 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.232 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:37.232 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:37.233 17:57:54 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:37.233 17:57:54 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:37.233 17:57:54 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:11:37.233 17:57:54 
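[annotation] At this point nvme2 and its namespace nvme2n1 are fully parsed, and the trace records them in the global lookup tables (nvme/functions.sh@58-63) before moving on to nvme3. A hedged sketch of that enumeration/bookkeeping loop, with stand-ins for the helpers; the real pci_can_use consults the PCI block/allow lists in scripts/common.sh, and the PCI-address lookup via readlink is an assumption about how the bdf is resolved:

    # Stand-in for scripts/common.sh's check against PCI_BLOCKED/PCI_ALLOWED.
    pci_can_use() { true; }

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        # /sys/class/nvme/nvmeX/device links to the PCI function, e.g. 0000:00:06.0
        pci=$(basename "$(readlink -f "$ctrl/device")")
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}                            # nvme2, nvme3, ...
        nvme_get_sketch "$ctrl_dev" nvme id-ctrl "/dev/$ctrl_dev"
        ctrls["$ctrl_dev"]=$ctrl_dev                    # ctrls[nvme2]=nvme2
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns               # name of the ns map
        bdfs["$ctrl_dev"]=$pci                          # bdfs[nvme2]=0000:00:06.0
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev      # indexed by controller number
    done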
-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:37.233 17:57:54 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:37.233 17:57:54 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:37.233 17:57:54 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:37.233 17:57:54 -- scripts/common.sh@15 -- # local i 00:11:37.233 17:57:54 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:37.233 17:57:54 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:37.233 17:57:54 -- scripts/common.sh@24 -- # return 0 00:11:37.233 17:57:54 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:37.233 17:57:54 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:37.233 17:57:54 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@18 -- # shift 00:11:37.233 17:57:54 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 
-- # eval 'nvme3[ieee]="525400"' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 
17:57:54 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.233 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:37.233 17:57:54 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:37.233 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # 
nvme3[frmw]=0x3 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 
'nvme3[tnvmcap]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.234 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:37.234 17:57:54 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.234 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:37.235 
17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.235 17:57:54 -- 
nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:37.235 17:57:54 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.235 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.235 17:57:54 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:37.236 17:57:54 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:37.236 17:57:54 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:11:37.236 17:57:54 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:11:37.236 17:57:54 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:11:37.236 17:57:54 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:11:37.236 17:57:54 -- nvme/functions.sh@18 -- # shift 00:11:37.236 17:57:54 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:11:37.236 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.236 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.236 17:57:54 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:11:37.496 17:57:54 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.496 17:57:54 -- nvme/functions.sh@22 -- # [[ 
-n 0x140000 ]] 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.496 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.496 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.496 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.496 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.496 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.496 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.496 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.496 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.496 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.496 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:37.496 
17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.496 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.496 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.496 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:37.496 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.496 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[npwg]="0"' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 
17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.497 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.497 17:57:54 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:37.497 17:57:54 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:37.498 17:57:54 -- nvme/functions.sh@21 -- # IFS=: 00:11:37.498 17:57:54 -- nvme/functions.sh@21 -- # read -r reg val 00:11:37.498 17:57:54 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:37.498 17:57:54 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:37.498 17:57:54 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:37.498 17:57:54 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:37.498 17:57:54 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:37.498 17:57:54 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:37.498 17:57:54 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:37.498 17:57:54 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:11:37.498 17:57:54 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:37.498 17:57:54 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:11:37.498 17:57:54 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:37.498 17:57:54 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:11:37.498 17:57:54 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:11:37.498 17:57:54 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:37.498 17:57:54 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:37.498 17:57:54 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:11:37.498 17:57:54 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:11:37.498 17:57:54 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:11:37.498 17:57:54 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:11:37.498 17:57:54 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:37.498 17:57:54 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:37.498 17:57:54 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:37.498 17:57:54 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:37.498 17:57:54 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:37.498 17:57:54 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:37.498 17:57:54 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:37.498 17:57:54 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:37.498 17:57:54 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:37.498 17:57:54 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:11:37.498 17:57:54 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:11:37.498 17:57:54 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:11:37.498 17:57:54 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:11:37.498 17:57:54 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:37.498 17:57:54 -- 
nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:37.498 17:57:54 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:37.498 17:57:54 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:37.498 17:57:54 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:37.498 17:57:54 -- nvme/functions.sh@76 -- # echo 0x88010 00:11:37.498 17:57:54 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:11:37.498 17:57:54 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:37.498 17:57:54 -- nvme/functions.sh@197 -- # echo nvme0 00:11:37.498 17:57:54 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:37.498 17:57:54 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:11:37.498 17:57:54 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:11:37.498 17:57:54 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:11:37.498 17:57:54 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:11:37.498 17:57:54 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:37.498 17:57:54 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:37.498 17:57:54 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:37.498 17:57:54 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:37.498 17:57:54 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:37.498 17:57:54 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:37.498 17:57:54 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:37.498 17:57:54 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:37.498 17:57:54 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:37.498 17:57:54 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:11:37.498 17:57:54 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:11:37.498 17:57:54 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:11:37.498 17:57:54 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:11:37.498 17:57:54 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:37.498 17:57:54 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:37.498 17:57:54 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:37.498 17:57:54 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:37.498 17:57:54 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:37.498 17:57:54 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:37.498 17:57:54 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:37.498 17:57:54 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:37.498 17:57:54 -- nvme/functions.sh@204 -- # trap - ERR 00:11:37.498 17:57:54 -- nvme/functions.sh@204 -- # print_backtrace 00:11:37.498 17:57:54 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:37.498 17:57:54 -- common/autotest_common.sh@1142 -- # return 0 00:11:37.498 17:57:54 -- nvme/functions.sh@204 -- # trap - ERR 00:11:37.498 17:57:54 -- nvme/functions.sh@204 -- # print_backtrace 00:11:37.498 17:57:54 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:37.498 17:57:54 -- common/autotest_common.sh@1142 -- # return 0 00:11:37.498 17:57:54 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:11:37.498 17:57:54 -- nvme/functions.sh@206 -- # echo nvme0 00:11:37.498 17:57:54 -- nvme/functions.sh@207 -- # return 0 00:11:37.498 17:57:54 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0 00:11:37.498 17:57:54 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0 00:11:37.498 17:57:54 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:38.436 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:38.696 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:11:38.696 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:38.696 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:38.954 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:38.954 17:57:55 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:38.954 17:57:55 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:38.954 17:57:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:38.954 17:57:55 -- common/autotest_common.sh@10 -- # set +x 00:11:38.954 ************************************ 00:11:38.954 START TEST nvme_flexible_data_placement 00:11:38.954 ************************************ 00:11:38.954 17:57:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:39.214 Initializing NVMe Controllers 00:11:39.214 Attaching to 0000:00:09.0 00:11:39.214 Controller supports FDP Attached to 0000:00:09.0 00:11:39.214 Namespace ID: 1 Endurance Group ID: 1 00:11:39.214 Initialization complete. 00:11:39.214 00:11:39.214 ================================== 00:11:39.214 == FDP tests for Namespace: #01 == 00:11:39.214 ================================== 00:11:39.214 00:11:39.214 Get Feature: FDP: 00:11:39.214 ================= 00:11:39.214 Enabled: Yes 00:11:39.214 FDP configuration Index: 0 00:11:39.214 00:11:39.214 FDP configurations log page 00:11:39.214 =========================== 00:11:39.214 Number of FDP configurations: 1 00:11:39.214 Version: 0 00:11:39.214 Size: 112 00:11:39.214 FDP Configuration Descriptor: 0 00:11:39.214 Descriptor Size: 96 00:11:39.214 Reclaim Group Identifier format: 2 00:11:39.214 FDP Volatile Write Cache: Not Present 00:11:39.214 FDP Configuration: Valid 00:11:39.214 Vendor Specific Size: 0 00:11:39.214 Number of Reclaim Groups: 2 00:11:39.214 Number of Reclaim Unit Handles: 8 00:11:39.214 Max Placement Identifiers: 128 00:11:39.214 Number of Namespaces Supported: 256 00:11:39.214 Reclaim Unit Nominal Size: 6000000 bytes 00:11:39.214 Estimated Reclaim Unit Time Limit: Not Reported 00:11:39.214 RUH Desc #000: RUH Type: Initially Isolated 00:11:39.215 RUH Desc #001: RUH Type: Initially Isolated 00:11:39.215 RUH Desc #002: RUH Type: Initially Isolated 00:11:39.215 RUH Desc #003: RUH Type: Initially Isolated 00:11:39.215 RUH Desc #004: RUH Type: Initially Isolated 00:11:39.215 RUH Desc #005: RUH Type: Initially Isolated 00:11:39.215 RUH Desc #006: RUH Type: Initially Isolated 00:11:39.215 RUH Desc #007: RUH Type: Initially Isolated 00:11:39.215 00:11:39.215 FDP reclaim unit handle usage log page 00:11:39.215 ====================================== 00:11:39.215 Number of Reclaim Unit Handles: 8 00:11:39.215 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:39.215 RUH Usage Desc #001: RUH Attributes: Unused 00:11:39.215 RUH Usage Desc #002: RUH Attributes: Unused 00:11:39.215 RUH Usage Desc #003: RUH Attributes: Unused 00:11:39.215 RUH Usage Desc #004: RUH Attributes: Unused 00:11:39.215 RUH Usage Desc #005: RUH Attributes: Unused 00:11:39.215 RUH Usage Desc #006: RUH Attributes: Unused 00:11:39.215 RUH Usage Desc #007: RUH Attributes: Unused 00:11:39.215 00:11:39.215 FDP statistics log page 00:11:39.215 ======================= 00:11:39.215 Host bytes with metadata written: 1762840576 00:11:39.215 Media bytes with metadata written: 1762983936 00:11:39.215 Media bytes erased: 0 00:11:39.215 00:11:39.215 FDP Reclaim unit handle status 
00:11:39.215 ============================== 00:11:39.215 Number of RUHS descriptors: 2 00:11:39.215 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002ed3 00:11:39.215 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:39.215 00:11:39.215 FDP write on placement id: 0 success 00:11:39.215 00:11:39.215 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:39.215 00:11:39.215 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:39.215 00:11:39.215 Get Feature: FDP Events for Placement handle: #0 00:11:39.215 ======================== 00:11:39.215 Number of FDP Events: 6 00:11:39.215 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:39.215 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:39.215 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:11:39.215 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:39.215 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:39.215 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:39.215 00:11:39.215 FDP events log page 00:11:39.215 =================== 00:11:39.215 Number of FDP events: 1 00:11:39.215 FDP Event #0: 00:11:39.215 Event Type: RU Not Written to Capacity 00:11:39.215 Placement Identifier: Valid 00:11:39.215 NSID: Valid 00:11:39.215 Location: Valid 00:11:39.215 Placement Identifier: 0 00:11:39.215 Event Timestamp: 3 00:11:39.215 Namespace Identifier: 1 00:11:39.215 Reclaim Group Identifier: 0 00:11:39.215 Reclaim Unit Handle Identifier: 0 00:11:39.215 00:11:39.215 FDP test passed 00:11:39.215 00:11:39.215 real 0m0.239s 00:11:39.215 user 0m0.060s 00:11:39.215 sys 0m0.078s 00:11:39.215 17:57:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:39.215 17:57:55 -- common/autotest_common.sh@10 -- # set +x 00:11:39.215 ************************************ 00:11:39.215 END TEST nvme_flexible_data_placement 00:11:39.215 ************************************ 00:11:39.215 00:11:39.215 real 0m8.761s 00:11:39.215 user 0m1.488s 00:11:39.215 sys 0m2.454s 00:11:39.215 17:57:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:39.215 17:57:56 -- common/autotest_common.sh@10 -- # set +x 00:11:39.215 ************************************ 00:11:39.215 END TEST nvme_fdp 00:11:39.215 ************************************ 00:11:39.215 17:57:56 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:11:39.215 17:57:56 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:39.215 17:57:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:39.215 17:57:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:39.215 17:57:56 -- common/autotest_common.sh@10 -- # set +x 00:11:39.215 ************************************ 00:11:39.215 START TEST nvme_rpc 00:11:39.215 ************************************ 00:11:39.215 17:57:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:39.474 * Looking for test storage... 
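The FDP-capable controller exercised above was selected by testing bit 19 of each controller's CTRATT identify field, as the ctrl_has_fdp xtrace records show: nvme0 reported ctratt=0x88010 (bit 19 set) and was chosen, while nvme1, nvme2 and nvme3 reported 0x8000. A minimal standalone sketch of that selection, assuming nvme-cli's plain-text id-ctrl output (the helper below is an illustration, not SPDK's exact functions.sh code):

  # Pick the first controller whose CTRATT advertises FDP (bit 19).
  ctrl_has_fdp() {
      local ctrl=$1 ctratt
      ctratt=$(nvme id-ctrl "/dev/$ctrl" | awk -F: '/^ctratt/ {gsub(/ /, "", $2); print $2}')
      (( ctratt & 1 << 19 ))    # e.g. 0x88010 & 0x80000 is non-zero, so FDP is supported
  }
  for ctrl in nvme0 nvme1 nvme2 nvme3; do
      ctrl_has_fdp "$ctrl" && { echo "$ctrl"; break; }
  done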
00:11:39.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:39.474 17:57:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:39.474 17:57:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:39.474 17:57:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:39.474 17:57:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:39.474 17:57:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:39.474 17:57:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:39.474 17:57:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:39.474 17:57:56 -- scripts/common.sh@335 -- # IFS=.-: 00:11:39.474 17:57:56 -- scripts/common.sh@335 -- # read -ra ver1 00:11:39.474 17:57:56 -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.474 17:57:56 -- scripts/common.sh@336 -- # read -ra ver2 00:11:39.474 17:57:56 -- scripts/common.sh@337 -- # local 'op=<' 00:11:39.474 17:57:56 -- scripts/common.sh@339 -- # ver1_l=2 00:11:39.474 17:57:56 -- scripts/common.sh@340 -- # ver2_l=1 00:11:39.474 17:57:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:39.474 17:57:56 -- scripts/common.sh@343 -- # case "$op" in 00:11:39.474 17:57:56 -- scripts/common.sh@344 -- # : 1 00:11:39.474 17:57:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:39.474 17:57:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:39.474 17:57:56 -- scripts/common.sh@364 -- # decimal 1 00:11:39.474 17:57:56 -- scripts/common.sh@352 -- # local d=1 00:11:39.474 17:57:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.474 17:57:56 -- scripts/common.sh@354 -- # echo 1 00:11:39.474 17:57:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:39.474 17:57:56 -- scripts/common.sh@365 -- # decimal 2 00:11:39.474 17:57:56 -- scripts/common.sh@352 -- # local d=2 00:11:39.474 17:57:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.474 17:57:56 -- scripts/common.sh@354 -- # echo 2 00:11:39.474 17:57:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:39.474 17:57:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:39.474 17:57:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:39.474 17:57:56 -- scripts/common.sh@367 -- # return 0 00:11:39.474 17:57:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.474 17:57:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:39.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.474 --rc genhtml_branch_coverage=1 00:11:39.474 --rc genhtml_function_coverage=1 00:11:39.474 --rc genhtml_legend=1 00:11:39.474 --rc geninfo_all_blocks=1 00:11:39.475 --rc geninfo_unexecuted_blocks=1 00:11:39.475 00:11:39.475 ' 00:11:39.475 17:57:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:39.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.475 --rc genhtml_branch_coverage=1 00:11:39.475 --rc genhtml_function_coverage=1 00:11:39.475 --rc genhtml_legend=1 00:11:39.475 --rc geninfo_all_blocks=1 00:11:39.475 --rc geninfo_unexecuted_blocks=1 00:11:39.475 00:11:39.475 ' 00:11:39.475 17:57:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:39.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.475 --rc genhtml_branch_coverage=1 00:11:39.475 --rc genhtml_function_coverage=1 00:11:39.475 --rc genhtml_legend=1 00:11:39.475 --rc geninfo_all_blocks=1 00:11:39.475 --rc geninfo_unexecuted_blocks=1 00:11:39.475 00:11:39.475 ' 00:11:39.475 17:57:56 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:39.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.475 --rc genhtml_branch_coverage=1 00:11:39.475 --rc genhtml_function_coverage=1 00:11:39.475 --rc genhtml_legend=1 00:11:39.475 --rc geninfo_all_blocks=1 00:11:39.475 --rc geninfo_unexecuted_blocks=1 00:11:39.475 00:11:39.475 ' 00:11:39.475 17:57:56 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:39.475 17:57:56 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:39.475 17:57:56 -- common/autotest_common.sh@1519 -- # bdfs=() 00:11:39.475 17:57:56 -- common/autotest_common.sh@1519 -- # local bdfs 00:11:39.475 17:57:56 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:11:39.475 17:57:56 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:11:39.475 17:57:56 -- common/autotest_common.sh@1508 -- # bdfs=() 00:11:39.475 17:57:56 -- common/autotest_common.sh@1508 -- # local bdfs 00:11:39.475 17:57:56 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:39.475 17:57:56 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:39.475 17:57:56 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:11:39.734 17:57:56 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:11:39.734 17:57:56 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:39.734 17:57:56 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:11:39.734 17:57:56 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:11:39.734 17:57:56 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=78289 00:11:39.734 17:57:56 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:39.734 17:57:56 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:39.734 17:57:56 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 78289 00:11:39.734 17:57:56 -- common/autotest_common.sh@829 -- # '[' -z 78289 ']' 00:11:39.734 17:57:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.734 17:57:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:39.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.734 17:57:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.734 17:57:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:39.734 17:57:56 -- common/autotest_common.sh@10 -- # set +x 00:11:39.734 [2024-11-26 17:57:56.536877] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:39.735 [2024-11-26 17:57:56.537002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78289 ] 00:11:40.008 [2024-11-26 17:57:56.688041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:40.008 [2024-11-26 17:57:56.730728] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:40.008 [2024-11-26 17:57:56.731150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.008 [2024-11-26 17:57:56.731244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.587 17:57:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:40.587 17:57:57 -- common/autotest_common.sh@862 -- # return 0 00:11:40.587 17:57:57 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:11:40.845 Nvme0n1 00:11:40.845 17:57:57 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:40.845 17:57:57 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:40.845 request: 00:11:40.845 { 00:11:40.845 "filename": "non_existing_file", 00:11:40.845 "bdev_name": "Nvme0n1", 00:11:40.845 "method": "bdev_nvme_apply_firmware", 00:11:40.845 "req_id": 1 00:11:40.845 } 00:11:40.845 Got JSON-RPC error response 00:11:40.845 response: 00:11:40.845 { 00:11:40.845 "code": -32603, 00:11:40.845 "message": "open file failed." 00:11:40.845 } 00:11:41.104 17:57:57 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:41.104 17:57:57 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:41.104 17:57:57 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:41.104 17:57:57 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:41.104 17:57:57 -- nvme/nvme_rpc.sh@40 -- # killprocess 78289 00:11:41.104 17:57:57 -- common/autotest_common.sh@936 -- # '[' -z 78289 ']' 00:11:41.104 17:57:57 -- common/autotest_common.sh@940 -- # kill -0 78289 00:11:41.104 17:57:57 -- common/autotest_common.sh@941 -- # uname 00:11:41.104 17:57:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:41.104 17:57:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78289 00:11:41.104 17:57:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:41.104 killing process with pid 78289 00:11:41.104 17:57:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:41.104 17:57:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78289' 00:11:41.104 17:57:58 -- common/autotest_common.sh@955 -- # kill 78289 00:11:41.104 17:57:58 -- common/autotest_common.sh@960 -- # wait 78289 00:11:41.671 00:11:41.671 real 0m2.274s 00:11:41.671 user 0m4.073s 00:11:41.671 sys 0m0.664s 00:11:41.671 17:57:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:41.671 ************************************ 00:11:41.671 END TEST nvme_rpc 00:11:41.671 ************************************ 00:11:41.671 17:57:58 -- common/autotest_common.sh@10 -- # set +x 00:11:41.671 17:57:58 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:41.671 17:57:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:41.671 17:57:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 
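The nvme_rpc run above deliberately drives bdev_nvme_apply_firmware down its error path: handing it a file that does not exist must produce JSON-RPC error -32603 ("open file failed.") rather than a hang or crash. A condensed sketch of the same probe against a running spdk_tgt, reusing the rpc.py calls recorded in the log (the error handling shown is illustrative):

  # Attach the controller, expect the firmware RPC to fail cleanly, then detach.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
  if $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
      echo "firmware apply unexpectedly succeeded" >&2
      exit 1
  fi
  $rpc bdev_nvme_detach_controller Nvme0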
00:11:41.671 17:57:58 -- common/autotest_common.sh@10 -- # set +x 00:11:41.671 ************************************ 00:11:41.671 START TEST nvme_rpc_timeouts 00:11:41.671 ************************************ 00:11:41.671 17:57:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:41.671 * Looking for test storage... 00:11:41.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:41.671 17:57:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:41.671 17:57:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:41.671 17:57:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:41.929 17:57:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:41.929 17:57:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:41.929 17:57:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:41.929 17:57:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:41.929 17:57:58 -- scripts/common.sh@335 -- # IFS=.-: 00:11:41.929 17:57:58 -- scripts/common.sh@335 -- # read -ra ver1 00:11:41.929 17:57:58 -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.929 17:57:58 -- scripts/common.sh@336 -- # read -ra ver2 00:11:41.929 17:57:58 -- scripts/common.sh@337 -- # local 'op=<' 00:11:41.929 17:57:58 -- scripts/common.sh@339 -- # ver1_l=2 00:11:41.929 17:57:58 -- scripts/common.sh@340 -- # ver2_l=1 00:11:41.929 17:57:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:41.929 17:57:58 -- scripts/common.sh@343 -- # case "$op" in 00:11:41.929 17:57:58 -- scripts/common.sh@344 -- # : 1 00:11:41.929 17:57:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:41.929 17:57:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:41.929 17:57:58 -- scripts/common.sh@364 -- # decimal 1 00:11:41.929 17:57:58 -- scripts/common.sh@352 -- # local d=1 00:11:41.929 17:57:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.929 17:57:58 -- scripts/common.sh@354 -- # echo 1 00:11:41.929 17:57:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:41.929 17:57:58 -- scripts/common.sh@365 -- # decimal 2 00:11:41.929 17:57:58 -- scripts/common.sh@352 -- # local d=2 00:11:41.929 17:57:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.929 17:57:58 -- scripts/common.sh@354 -- # echo 2 00:11:41.929 17:57:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:41.929 17:57:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:41.929 17:57:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:41.929 17:57:58 -- scripts/common.sh@367 -- # return 0 00:11:41.929 17:57:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.929 17:57:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:41.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.929 --rc genhtml_branch_coverage=1 00:11:41.929 --rc genhtml_function_coverage=1 00:11:41.929 --rc genhtml_legend=1 00:11:41.929 --rc geninfo_all_blocks=1 00:11:41.929 --rc geninfo_unexecuted_blocks=1 00:11:41.929 00:11:41.929 ' 00:11:41.929 17:57:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:41.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.929 --rc genhtml_branch_coverage=1 00:11:41.929 --rc genhtml_function_coverage=1 00:11:41.929 --rc genhtml_legend=1 00:11:41.929 --rc geninfo_all_blocks=1 00:11:41.929 --rc geninfo_unexecuted_blocks=1 00:11:41.929 00:11:41.929 ' 00:11:41.929 17:57:58 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:41.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.929 --rc genhtml_branch_coverage=1 00:11:41.929 --rc genhtml_function_coverage=1 00:11:41.929 --rc genhtml_legend=1 00:11:41.929 --rc geninfo_all_blocks=1 00:11:41.929 --rc geninfo_unexecuted_blocks=1 00:11:41.929 00:11:41.929 ' 00:11:41.929 17:57:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:41.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.929 --rc genhtml_branch_coverage=1 00:11:41.929 --rc genhtml_function_coverage=1 00:11:41.929 --rc genhtml_legend=1 00:11:41.929 --rc geninfo_all_blocks=1 00:11:41.929 --rc geninfo_unexecuted_blocks=1 00:11:41.929 00:11:41.929 ' 00:11:41.929 17:57:58 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:41.929 17:57:58 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_78342 00:11:41.929 17:57:58 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_78342 00:11:41.929 17:57:58 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=78377 00:11:41.929 17:57:58 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:41.929 17:57:58 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:41.929 17:57:58 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 78377 00:11:41.929 17:57:58 -- common/autotest_common.sh@829 -- # '[' -z 78377 ']' 00:11:41.929 17:57:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.929 17:57:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:41.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.929 17:57:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.929 17:57:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:41.929 17:57:58 -- common/autotest_common.sh@10 -- # set +x 00:11:41.929 [2024-11-26 17:57:58.766320] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:41.929 [2024-11-26 17:57:58.766432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78377 ] 00:11:42.188 [2024-11-26 17:57:58.919592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:42.188 [2024-11-26 17:57:58.961001] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:42.188 [2024-11-26 17:57:58.961386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.188 [2024-11-26 17:57:58.961497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.753 17:57:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:42.753 Checking default timeout settings: 00:11:42.753 17:57:59 -- common/autotest_common.sh@862 -- # return 0 00:11:42.753 17:57:59 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:42.753 17:57:59 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:43.011 Making settings changes with rpc: 00:11:43.011 17:57:59 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:43.011 17:57:59 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:43.269 Check default vs. modified settings: 00:11:43.270 17:58:00 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:43.270 17:58:00 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_78342 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_78342 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:43.528 Setting action_on_timeout is changed as expected. 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_78342 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_78342 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:43.528 Setting timeout_us is changed as expected. 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_78342 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:43.528 17:58:00 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:43.787 17:58:00 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:43.787 17:58:00 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_78342 00:11:43.787 17:58:00 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:43.787 17:58:00 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:43.787 17:58:00 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:43.787 17:58:00 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:43.787 17:58:00 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:43.787 Setting timeout_admin_us is changed as expected. 00:11:43.787 17:58:00 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:43.787 17:58:00 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_78342 /tmp/settings_modified_78342 00:11:43.787 17:58:00 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 78377 00:11:43.787 17:58:00 -- common/autotest_common.sh@936 -- # '[' -z 78377 ']' 00:11:43.787 17:58:00 -- common/autotest_common.sh@940 -- # kill -0 78377 00:11:43.787 17:58:00 -- common/autotest_common.sh@941 -- # uname 00:11:43.787 17:58:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:43.787 17:58:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78377 00:11:43.787 17:58:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:43.787 17:58:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:43.787 17:58:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78377' 00:11:43.787 killing process with pid 78377 00:11:43.787 17:58:00 -- common/autotest_common.sh@955 -- # kill 78377 00:11:43.787 17:58:00 -- common/autotest_common.sh@960 -- # wait 78377 00:11:44.046 RPC TIMEOUT SETTING TEST PASSED. 00:11:44.046 17:58:00 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
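  # Condensed, the check that just passed dumps the bdev_nvme options before and
  # after the RPC and compares each field. A minimal sketch — tmpfile names are
  # illustrative, every command is as traced above:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc save_config > /tmp/settings_default_$$
  $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
  $rpc save_config > /tmp/settings_modified_$$
  for setting in action_on_timeout timeout_us timeout_admin_us; do
      before=$(grep "$setting" /tmp/settings_default_$$  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" /tmp/settings_modified_$$ | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
  done
  # The values amount to a 12 s per-I/O timeout and a 24 s admin-command timeout,
  # with the abort action taken on expiry.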
00:11:44.046 00:11:44.046 real 0m2.440s 00:11:44.046 user 0m4.634s 00:11:44.046 sys 0m0.702s 00:11:44.046 17:58:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:44.046 17:58:00 -- common/autotest_common.sh@10 -- # set +x 00:11:44.046 ************************************ 00:11:44.046 END TEST nvme_rpc_timeouts 00:11:44.046 ************************************ 00:11:44.046 17:58:00 -- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']' 00:11:44.046 17:58:00 -- spdk/autotest.sh@242 -- # [[ 1 -eq 1 ]] 00:11:44.046 17:58:00 -- spdk/autotest.sh@243 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:44.046 17:58:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:44.046 17:58:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:44.046 17:58:00 -- common/autotest_common.sh@10 -- # set +x 00:11:44.305 ************************************ 00:11:44.305 START TEST nvme_xnvme 00:11:44.305 ************************************ 00:11:44.305 17:58:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:44.305 * Looking for test storage... 00:11:44.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:44.305 17:58:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:44.305 17:58:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:44.305 17:58:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:44.305 17:58:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:44.305 17:58:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:44.305 17:58:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:44.305 17:58:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:44.305 17:58:01 -- scripts/common.sh@335 -- # IFS=.-: 00:11:44.305 17:58:01 -- scripts/common.sh@335 -- # read -ra ver1 00:11:44.305 17:58:01 -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.305 17:58:01 -- scripts/common.sh@336 -- # read -ra ver2 00:11:44.305 17:58:01 -- scripts/common.sh@337 -- # local 'op=<' 00:11:44.305 17:58:01 -- scripts/common.sh@339 -- # ver1_l=2 00:11:44.305 17:58:01 -- scripts/common.sh@340 -- # ver2_l=1 00:11:44.305 17:58:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:44.305 17:58:01 -- scripts/common.sh@343 -- # case "$op" in 00:11:44.305 17:58:01 -- scripts/common.sh@344 -- # : 1 00:11:44.305 17:58:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:44.305 17:58:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:44.305 17:58:01 -- scripts/common.sh@364 -- # decimal 1 00:11:44.305 17:58:01 -- scripts/common.sh@352 -- # local d=1 00:11:44.305 17:58:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.305 17:58:01 -- scripts/common.sh@354 -- # echo 1 00:11:44.305 17:58:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:44.305 17:58:01 -- scripts/common.sh@365 -- # decimal 2 00:11:44.305 17:58:01 -- scripts/common.sh@352 -- # local d=2 00:11:44.305 17:58:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.305 17:58:01 -- scripts/common.sh@354 -- # echo 2 00:11:44.305 17:58:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:44.305 17:58:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:44.305 17:58:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:44.305 17:58:01 -- scripts/common.sh@367 -- # return 0 00:11:44.305 17:58:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:44.305 17:58:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:44.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.305 --rc genhtml_branch_coverage=1 00:11:44.305 --rc genhtml_function_coverage=1 00:11:44.305 --rc genhtml_legend=1 00:11:44.305 --rc geninfo_all_blocks=1 00:11:44.305 --rc geninfo_unexecuted_blocks=1 00:11:44.305 00:11:44.305 ' 00:11:44.305 17:58:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:44.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.305 --rc genhtml_branch_coverage=1 00:11:44.305 --rc genhtml_function_coverage=1 00:11:44.305 --rc genhtml_legend=1 00:11:44.305 --rc geninfo_all_blocks=1 00:11:44.305 --rc geninfo_unexecuted_blocks=1 00:11:44.305 00:11:44.305 ' 00:11:44.305 17:58:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:44.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.305 --rc genhtml_branch_coverage=1 00:11:44.305 --rc genhtml_function_coverage=1 00:11:44.305 --rc genhtml_legend=1 00:11:44.305 --rc geninfo_all_blocks=1 00:11:44.305 --rc geninfo_unexecuted_blocks=1 00:11:44.305 00:11:44.305 ' 00:11:44.305 17:58:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:44.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.305 --rc genhtml_branch_coverage=1 00:11:44.305 --rc genhtml_function_coverage=1 00:11:44.305 --rc genhtml_legend=1 00:11:44.305 --rc geninfo_all_blocks=1 00:11:44.305 --rc geninfo_unexecuted_blocks=1 00:11:44.305 00:11:44.305 ' 00:11:44.305 17:58:01 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:44.305 17:58:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.305 17:58:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.305 17:58:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.305 17:58:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.305 17:58:01 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.305 17:58:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.305 17:58:01 -- paths/export.sh@5 -- # export PATH 00:11:44.305 17:58:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.305 17:58:01 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:44.305 17:58:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:44.305 17:58:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:44.305 17:58:01 -- common/autotest_common.sh@10 -- # set +x 00:11:44.305 ************************************ 00:11:44.305 START TEST xnvme_to_malloc_dd_copy 00:11:44.305 ************************************ 00:11:44.305 17:58:01 -- common/autotest_common.sh@1114 -- # malloc_to_xnvme_copy 00:11:44.305 17:58:01 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:44.305 17:58:01 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:44.305 17:58:01 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:44.564 17:58:01 -- dd/common.sh@191 -- # return 00:11:44.564 17:58:01 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:44.564 17:58:01 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:44.564 17:58:01 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:44.564 17:58:01 -- xnvme/xnvme.sh@18 -- # local io 00:11:44.564 17:58:01 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:44.564 17:58:01 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:44.564 17:58:01 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:44.564 17:58:01 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:44.564 17:58:01 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:44.564 17:58:01 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:44.564 17:58:01 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:44.564 17:58:01 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:44.564 17:58:01 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:44.564 17:58:01 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:44.564 17:58:01 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:44.564 17:58:01 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:44.564 17:58:01 -- xnvme/xnvme.sh@42 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:44.564 17:58:01 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:44.564 17:58:01 -- dd/common.sh@31 -- # xtrace_disable 00:11:44.564 17:58:01 -- common/autotest_common.sh@10 -- # set +x 00:11:44.564 { 00:11:44.564 "subsystems": [ 00:11:44.564 { 00:11:44.564 "subsystem": "bdev", 00:11:44.564 "config": [ 00:11:44.564 { 00:11:44.564 "params": { 00:11:44.564 "block_size": 512, 00:11:44.564 "num_blocks": 2097152, 00:11:44.564 "name": "malloc0" 00:11:44.564 }, 00:11:44.564 "method": "bdev_malloc_create" 00:11:44.564 }, 00:11:44.564 { 00:11:44.564 "params": { 00:11:44.564 "io_mechanism": "libaio", 00:11:44.564 "filename": "/dev/nullb0", 00:11:44.564 "name": "null0" 00:11:44.564 }, 00:11:44.564 "method": "bdev_xnvme_create" 00:11:44.564 }, 00:11:44.564 { 00:11:44.564 "method": "bdev_wait_for_examine" 00:11:44.564 } 00:11:44.564 ] 00:11:44.564 } 00:11:44.564 ] 00:11:44.564 } 00:11:44.564 [2024-11-26 17:58:01.324095] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:44.564 [2024-11-26 17:58:01.324205] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78499 ] 00:11:44.564 [2024-11-26 17:58:01.476299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.848 [2024-11-26 17:58:01.515212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.221  [2024-11-26T17:58:04.078Z] Copying: 263/1024 [MB] (263 MBps) [2024-11-26T17:58:05.011Z] Copying: 524/1024 [MB] (260 MBps) [2024-11-26T17:58:05.945Z] Copying: 785/1024 [MB] (260 MBps) [2024-11-26T17:58:06.512Z] Copying: 1024/1024 [MB] (average 261 MBps) 00:11:49.586 00:11:49.586 17:58:06 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:49.586 17:58:06 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:49.586 17:58:06 -- dd/common.sh@31 -- # xtrace_disable 00:11:49.586 17:58:06 -- common/autotest_common.sh@10 -- # set +x 00:11:49.586 { 00:11:49.586 "subsystems": [ 00:11:49.586 { 00:11:49.586 "subsystem": "bdev", 00:11:49.586 "config": [ 00:11:49.586 { 00:11:49.586 "params": { 00:11:49.586 "block_size": 512, 00:11:49.586 "num_blocks": 2097152, 00:11:49.586 "name": "malloc0" 00:11:49.586 }, 00:11:49.586 "method": "bdev_malloc_create" 00:11:49.586 }, 00:11:49.586 { 00:11:49.586 "params": { 00:11:49.586 "io_mechanism": "libaio", 00:11:49.586 "filename": "/dev/nullb0", 00:11:49.586 "name": "null0" 00:11:49.586 }, 00:11:49.586 "method": "bdev_xnvme_create" 00:11:49.586 }, 00:11:49.586 { 00:11:49.586 "method": "bdev_wait_for_examine" 00:11:49.586 } 00:11:49.586 ] 00:11:49.586 } 00:11:49.586 ] 00:11:49.586 } 00:11:49.586 [2024-11-26 17:58:06.389218] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
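  # Each copy direction above is a single spdk_dd invocation handed the printed JSON
  # over /dev/fd/62. A standalone sketch of the same run, with the config in a file
  # instead (/tmp/xnvme_dd.json is an illustrative path; all fields copied from the trace):
  cat > /tmp/xnvme_dd.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [
    { "method": "bdev_malloc_create",
      "params": { "name": "malloc0", "block_size": 512, "num_blocks": 2097152 } },
    { "method": "bdev_xnvme_create",
      "params": { "name": "null0", "filename": "/dev/nullb0", "io_mechanism": "libaio" } },
    { "method": "bdev_wait_for_examine" }
  ] } ] }
  EOF
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_dd.json   # malloc -> null
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /tmp/xnvme_dd.json   # null -> malloc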
00:11:49.586 [2024-11-26 17:58:06.389338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78567 ] 00:11:49.845 [2024-11-26 17:58:06.540831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.845 [2024-11-26 17:58:06.581275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.221  [2024-11-26T17:58:09.096Z] Copying: 258/1024 [MB] (258 MBps) [2024-11-26T17:58:10.034Z] Copying: 515/1024 [MB] (257 MBps) [2024-11-26T17:58:10.969Z] Copying: 778/1024 [MB] (262 MBps) [2024-11-26T17:58:11.535Z] Copying: 1024/1024 [MB] (average 260 MBps) 00:11:54.609 00:11:54.609 17:58:11 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:54.609 17:58:11 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:54.609 17:58:11 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:54.609 17:58:11 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:54.609 17:58:11 -- dd/common.sh@31 -- # xtrace_disable 00:11:54.609 17:58:11 -- common/autotest_common.sh@10 -- # set +x 00:11:54.609 { 00:11:54.609 "subsystems": [ 00:11:54.609 { 00:11:54.609 "subsystem": "bdev", 00:11:54.609 "config": [ 00:11:54.609 { 00:11:54.609 "params": { 00:11:54.609 "block_size": 512, 00:11:54.609 "num_blocks": 2097152, 00:11:54.609 "name": "malloc0" 00:11:54.609 }, 00:11:54.609 "method": "bdev_malloc_create" 00:11:54.609 }, 00:11:54.609 { 00:11:54.609 "params": { 00:11:54.610 "io_mechanism": "io_uring", 00:11:54.610 "filename": "/dev/nullb0", 00:11:54.610 "name": "null0" 00:11:54.610 }, 00:11:54.610 "method": "bdev_xnvme_create" 00:11:54.610 }, 00:11:54.610 { 00:11:54.610 "method": "bdev_wait_for_examine" 00:11:54.610 } 00:11:54.610 ] 00:11:54.610 } 00:11:54.610 ] 00:11:54.610 } 00:11:54.610 [2024-11-26 17:58:11.472516] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
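  # As the trace shows, the libaio and io_uring passes share one config: only the
  # io_mechanism field of the bash associative array is rewritten per loop iteration.
  # Sketched from the lines above:
  declare -A method_bdev_xnvme_create_0   # the script declares this with local -A
  xnvme_io=(libaio io_uring)
  for io in "${xnvme_io[@]}"; do
      method_bdev_xnvme_create_0["io_mechanism"]=$io
      # gen_conf re-renders the JSON, then both dd directions run as sketched earlier
  done
  # Keeping one config and flipping a single field is what makes the two engines
  # directly comparable on the same null_blk device.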
00:11:54.610 [2024-11-26 17:58:11.472641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78622 ] 00:11:54.867 [2024-11-26 17:58:11.622586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.867 [2024-11-26 17:58:11.662943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.238  [2024-11-26T17:58:14.097Z] Copying: 266/1024 [MB] (266 MBps) [2024-11-26T17:58:15.029Z] Copying: 517/1024 [MB] (251 MBps) [2024-11-26T17:58:15.976Z] Copying: 768/1024 [MB] (251 MBps) [2024-11-26T17:58:16.234Z] Copying: 1019/1024 [MB] (250 MBps) [2024-11-26T17:58:16.824Z] Copying: 1024/1024 [MB] (average 255 MBps) 00:11:59.898 00:11:59.898 17:58:16 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:59.898 17:58:16 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:59.898 17:58:16 -- dd/common.sh@31 -- # xtrace_disable 00:11:59.898 17:58:16 -- common/autotest_common.sh@10 -- # set +x 00:11:59.898 { 00:11:59.898 "subsystems": [ 00:11:59.898 { 00:11:59.898 "subsystem": "bdev", 00:11:59.898 "config": [ 00:11:59.898 { 00:11:59.898 "params": { 00:11:59.898 "block_size": 512, 00:11:59.898 "num_blocks": 2097152, 00:11:59.898 "name": "malloc0" 00:11:59.898 }, 00:11:59.898 "method": "bdev_malloc_create" 00:11:59.898 }, 00:11:59.898 { 00:11:59.898 "params": { 00:11:59.898 "io_mechanism": "io_uring", 00:11:59.898 "filename": "/dev/nullb0", 00:11:59.898 "name": "null0" 00:11:59.898 }, 00:11:59.898 "method": "bdev_xnvme_create" 00:11:59.898 }, 00:11:59.898 { 00:11:59.898 "method": "bdev_wait_for_examine" 00:11:59.898 } 00:11:59.898 ] 00:11:59.898 } 00:11:59.898 ] 00:11:59.898 } 00:11:59.898 [2024-11-26 17:58:16.595945] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:59.898 [2024-11-26 17:58:16.596052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78690 ] 00:11:59.898 [2024-11-26 17:58:16.744767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.898 [2024-11-26 17:58:16.785603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.288  [2024-11-26T17:58:19.148Z] Copying: 272/1024 [MB] (272 MBps) [2024-11-26T17:58:20.520Z] Copying: 545/1024 [MB] (272 MBps) [2024-11-26T17:58:21.086Z] Copying: 818/1024 [MB] (273 MBps) [2024-11-26T17:58:21.654Z] Copying: 1024/1024 [MB] (average 273 MBps) 00:12:04.728 00:12:04.728 17:58:21 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:04.728 17:58:21 -- dd/common.sh@195 -- # modprobe -r null_blk 00:12:04.728 00:12:04.728 real 0m20.207s 00:12:04.728 user 0m15.689s 00:12:04.728 sys 0m4.053s 00:12:04.728 17:58:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:04.728 17:58:21 -- common/autotest_common.sh@10 -- # set +x 00:12:04.728 ************************************ 00:12:04.728 END TEST xnvme_to_malloc_dd_copy 00:12:04.728 ************************************ 00:12:04.728 17:58:21 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:04.728 17:58:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:04.728 17:58:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:04.728 17:58:21 -- common/autotest_common.sh@10 -- # set +x 00:12:04.728 ************************************ 00:12:04.728 START TEST xnvme_bdevperf 00:12:04.728 ************************************ 00:12:04.728 17:58:21 -- common/autotest_common.sh@1114 -- # xnvme_bdevperf 00:12:04.728 17:58:21 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:04.728 17:58:21 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:12:04.728 17:58:21 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:12:04.728 17:58:21 -- dd/common.sh@191 -- # return 00:12:04.728 17:58:21 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:04.728 17:58:21 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:04.728 17:58:21 -- xnvme/xnvme.sh@60 -- # local io 00:12:04.728 17:58:21 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:04.728 17:58:21 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:04.728 17:58:21 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:04.728 17:58:21 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:04.728 17:58:21 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:04.728 17:58:21 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:04.728 17:58:21 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:04.728 17:58:21 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:04.728 17:58:21 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:04.728 17:58:21 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:04.728 17:58:21 -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:04.728 17:58:21 -- dd/common.sh@31 -- # xtrace_disable 00:12:04.728 17:58:21 -- common/autotest_common.sh@10 -- # set +x 00:12:04.728 { 00:12:04.728 "subsystems": [ 00:12:04.728 { 00:12:04.728 "subsystem": "bdev", 00:12:04.728 "config": [ 00:12:04.728 { 00:12:04.728 "params": { 00:12:04.728 "io_mechanism": "libaio", 
00:12:04.728 "filename": "/dev/nullb0", 00:12:04.728 "name": "null0" 00:12:04.728 }, 00:12:04.728 "method": "bdev_xnvme_create" 00:12:04.728 }, 00:12:04.728 { 00:12:04.728 "method": "bdev_wait_for_examine" 00:12:04.728 } 00:12:04.728 ] 00:12:04.728 } 00:12:04.728 ] 00:12:04.728 } 00:12:04.728 [2024-11-26 17:58:21.611010] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:04.728 [2024-11-26 17:58:21.611135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78772 ] 00:12:04.986 [2024-11-26 17:58:21.762679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.986 [2024-11-26 17:58:21.802401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.986 Running I/O for 5 seconds... 00:12:10.263 00:12:10.263 Latency(us) 00:12:10.263 [2024-11-26T17:58:27.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:10.263 [2024-11-26T17:58:27.189Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:10.263 null0 : 5.00 156570.65 611.60 0.00 0.00 406.46 122.55 756.69 00:12:10.263 [2024-11-26T17:58:27.189Z] =================================================================================================================== 00:12:10.263 [2024-11-26T17:58:27.189Z] Total : 156570.65 611.60 0.00 0.00 406.46 122.55 756.69 00:12:10.263 17:58:27 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:10.263 17:58:27 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:10.263 17:58:27 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:10.263 17:58:27 -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:10.263 17:58:27 -- dd/common.sh@31 -- # xtrace_disable 00:12:10.263 17:58:27 -- common/autotest_common.sh@10 -- # set +x 00:12:10.263 { 00:12:10.263 "subsystems": [ 00:12:10.263 { 00:12:10.263 "subsystem": "bdev", 00:12:10.263 "config": [ 00:12:10.263 { 00:12:10.263 "params": { 00:12:10.263 "io_mechanism": "io_uring", 00:12:10.263 "filename": "/dev/nullb0", 00:12:10.263 "name": "null0" 00:12:10.263 }, 00:12:10.263 "method": "bdev_xnvme_create" 00:12:10.263 }, 00:12:10.263 { 00:12:10.263 "method": "bdev_wait_for_examine" 00:12:10.263 } 00:12:10.263 ] 00:12:10.263 } 00:12:10.263 ] 00:12:10.263 } 00:12:10.521 [2024-11-26 17:58:27.211415] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:10.521 [2024-11-26 17:58:27.211545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78846 ] 00:12:10.521 [2024-11-26 17:58:27.364235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.521 [2024-11-26 17:58:27.402205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.780 Running I/O for 5 seconds... 
00:12:16.044 00:12:16.044 Latency(us) 00:12:16.044 [2024-11-26T17:58:32.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:16.044 [2024-11-26T17:58:32.970Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:16.044 null0 : 5.00 205131.08 801.29 0.00 0.00 309.73 196.58 421.11 00:12:16.044 [2024-11-26T17:58:32.970Z] =================================================================================================================== 00:12:16.044 [2024-11-26T17:58:32.970Z] Total : 205131.08 801.29 0.00 0.00 309.73 196.58 421.11 00:12:16.044 17:58:32 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:16.044 17:58:32 -- dd/common.sh@195 -- # modprobe -r null_blk 00:12:16.044 00:12:16.044 real 0m11.234s 00:12:16.044 user 0m7.805s 00:12:16.044 sys 0m3.229s 00:12:16.044 17:58:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:16.044 17:58:32 -- common/autotest_common.sh@10 -- # set +x 00:12:16.044 ************************************ 00:12:16.044 END TEST xnvme_bdevperf 00:12:16.044 ************************************ 00:12:16.044 00:12:16.044 real 0m31.819s 00:12:16.044 user 0m23.673s 00:12:16.044 sys 0m7.491s 00:12:16.044 17:58:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:16.044 17:58:32 -- common/autotest_common.sh@10 -- # set +x 00:12:16.044 ************************************ 00:12:16.044 END TEST nvme_xnvme 00:12:16.044 ************************************ 00:12:16.044 17:58:32 -- spdk/autotest.sh@244 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:16.044 17:58:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:16.044 17:58:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:16.044 17:58:32 -- common/autotest_common.sh@10 -- # set +x 00:12:16.044 ************************************ 00:12:16.044 START TEST blockdev_xnvme 00:12:16.044 ************************************ 00:12:16.044 17:58:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:16.303 * Looking for test storage... 00:12:16.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:16.303 17:58:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:16.303 17:58:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:16.303 17:58:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:16.303 17:58:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:16.303 17:58:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:16.303 17:58:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:16.303 17:58:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:16.303 17:58:33 -- scripts/common.sh@335 -- # IFS=.-: 00:12:16.303 17:58:33 -- scripts/common.sh@335 -- # read -ra ver1 00:12:16.303 17:58:33 -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.303 17:58:33 -- scripts/common.sh@336 -- # read -ra ver2 00:12:16.303 17:58:33 -- scripts/common.sh@337 -- # local 'op=<' 00:12:16.303 17:58:33 -- scripts/common.sh@339 -- # ver1_l=2 00:12:16.303 17:58:33 -- scripts/common.sh@340 -- # ver2_l=1 00:12:16.303 17:58:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:16.303 17:58:33 -- scripts/common.sh@343 -- # case "$op" in 00:12:16.303 17:58:33 -- scripts/common.sh@344 -- # : 1 00:12:16.303 17:58:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:16.303 17:58:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:16.303 17:58:33 -- scripts/common.sh@364 -- # decimal 1 00:12:16.303 17:58:33 -- scripts/common.sh@352 -- # local d=1 00:12:16.303 17:58:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.303 17:58:33 -- scripts/common.sh@354 -- # echo 1 00:12:16.303 17:58:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:16.303 17:58:33 -- scripts/common.sh@365 -- # decimal 2 00:12:16.303 17:58:33 -- scripts/common.sh@352 -- # local d=2 00:12:16.303 17:58:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.303 17:58:33 -- scripts/common.sh@354 -- # echo 2 00:12:16.303 17:58:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:16.303 17:58:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:16.303 17:58:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:16.303 17:58:33 -- scripts/common.sh@367 -- # return 0 00:12:16.303 17:58:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.303 17:58:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:16.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.303 --rc genhtml_branch_coverage=1 00:12:16.303 --rc genhtml_function_coverage=1 00:12:16.303 --rc genhtml_legend=1 00:12:16.303 --rc geninfo_all_blocks=1 00:12:16.303 --rc geninfo_unexecuted_blocks=1 00:12:16.303 00:12:16.303 ' 00:12:16.303 17:58:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:16.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.303 --rc genhtml_branch_coverage=1 00:12:16.303 --rc genhtml_function_coverage=1 00:12:16.303 --rc genhtml_legend=1 00:12:16.303 --rc geninfo_all_blocks=1 00:12:16.303 --rc geninfo_unexecuted_blocks=1 00:12:16.303 00:12:16.303 ' 00:12:16.303 17:58:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:16.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.303 --rc genhtml_branch_coverage=1 00:12:16.303 --rc genhtml_function_coverage=1 00:12:16.303 --rc genhtml_legend=1 00:12:16.303 --rc geninfo_all_blocks=1 00:12:16.303 --rc geninfo_unexecuted_blocks=1 00:12:16.303 00:12:16.303 ' 00:12:16.303 17:58:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:16.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.303 --rc genhtml_branch_coverage=1 00:12:16.303 --rc genhtml_function_coverage=1 00:12:16.303 --rc genhtml_legend=1 00:12:16.303 --rc geninfo_all_blocks=1 00:12:16.303 --rc geninfo_unexecuted_blocks=1 00:12:16.303 00:12:16.303 ' 00:12:16.303 17:58:33 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:16.303 17:58:33 -- bdev/nbd_common.sh@6 -- # set -e 00:12:16.303 17:58:33 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:16.303 17:58:33 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:16.303 17:58:33 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:16.303 17:58:33 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:16.303 17:58:33 -- bdev/blockdev.sh@18 -- # : 00:12:16.303 17:58:33 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:12:16.303 17:58:33 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:12:16.303 17:58:33 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:12:16.303 17:58:33 -- bdev/blockdev.sh@672 -- # uname -s 00:12:16.303 17:58:33 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:12:16.303 17:58:33 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:12:16.303 17:58:33 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:12:16.303 17:58:33 -- bdev/blockdev.sh@681 -- # crypto_device= 00:12:16.303 17:58:33 -- bdev/blockdev.sh@682 -- # dek= 00:12:16.303 17:58:33 -- bdev/blockdev.sh@683 -- # env_ctx= 00:12:16.303 17:58:33 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:12:16.303 17:58:33 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:12:16.303 17:58:33 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:12:16.303 17:58:33 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:12:16.303 17:58:33 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:12:16.303 17:58:33 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=78982 00:12:16.303 17:58:33 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:16.303 17:58:33 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:16.303 17:58:33 -- bdev/blockdev.sh@47 -- # waitforlisten 78982 00:12:16.303 17:58:33 -- common/autotest_common.sh@829 -- # '[' -z 78982 ']' 00:12:16.303 17:58:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.303 17:58:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:16.303 17:58:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.303 17:58:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:16.303 17:58:33 -- common/autotest_common.sh@10 -- # set +x 00:12:16.303 [2024-11-26 17:58:33.183303] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:16.303 [2024-11-26 17:58:33.183450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78982 ] 00:12:16.562 [2024-11-26 17:58:33.332402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.562 [2024-11-26 17:58:33.370518] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:16.562 [2024-11-26 17:58:33.370720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.130 17:58:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:17.130 17:58:33 -- common/autotest_common.sh@862 -- # return 0 00:12:17.130 17:58:33 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:12:17.130 17:58:33 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:12:17.130 17:58:33 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:12:17.130 17:58:33 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:12:17.130 17:58:33 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:18.064 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:18.064 Waiting for block devices as requested 00:12:18.064 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:12:18.064 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:12:18.323 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:12:18.323 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:12:23.596 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:12:23.597 17:58:40 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:12:23.597 17:58:40 -- 
common/autotest_common.sh@1664 -- # zoned_devs=() 00:12:23.597 17:58:40 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:12:23.597 17:58:40 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:12:23.597 17:58:40 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:23.597 17:58:40 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:12:23.597 17:58:40 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:12:23.597 17:58:40 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:12:23.597 17:58:40 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:23.597 17:58:40 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:23.597 17:58:40 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:12:23.597 17:58:40 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:12:23.597 17:58:40 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:23.597 17:58:40 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:23.597 17:58:40 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:23.597 17:58:40 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:12:23.597 17:58:40 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:12:23.597 17:58:40 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:23.597 17:58:40 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:23.597 17:58:40 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:23.597 17:58:40 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:12:23.597 17:58:40 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:12:23.597 17:58:40 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:12:23.597 17:58:40 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:23.597 17:58:40 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:23.597 17:58:40 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:12:23.597 17:58:40 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:12:23.597 17:58:40 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:12:23.597 17:58:40 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:23.597 17:58:40 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:23.597 17:58:40 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:12:23.597 17:58:40 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:12:23.597 17:58:40 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:23.597 17:58:40 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:23.597 17:58:40 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:23.597 17:58:40 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:12:23.597 17:58:40 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:12:23.597 17:58:40 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:23.597 17:58:40 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:23.597 17:58:40 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:23.597 17:58:40 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:12:23.597 17:58:40 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:23.597 17:58:40 -- bdev/blockdev.sh@94 -- # 
nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:23.597 17:58:40 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:23.597 17:58:40 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:12:23.597 17:58:40 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:23.597 17:58:40 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:23.597 17:58:40 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:23.597 17:58:40 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:12:23.597 17:58:40 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:23.597 17:58:40 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:23.597 17:58:40 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:23.597 17:58:40 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:12:23.597 17:58:40 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:23.597 17:58:40 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:23.597 17:58:40 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:23.597 17:58:40 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:12:23.597 17:58:40 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:23.597 17:58:40 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:23.597 17:58:40 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:23.597 17:58:40 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:12:23.597 17:58:40 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:23.597 17:58:40 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:23.597 17:58:40 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:12:23.597 17:58:40 -- bdev/blockdev.sh@98 -- # rpc_cmd 00:12:23.597 17:58:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.597 17:58:40 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:23.597 17:58:40 -- common/autotest_common.sh@10 -- # set +x 00:12:23.597 nvme0n1 00:12:23.597 nvme1n1 00:12:23.597 nvme1n2 00:12:23.597 nvme1n3 00:12:23.597 nvme2n1 00:12:23.597 nvme3n1 00:12:23.597 17:58:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.597 17:58:40 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:12:23.597 17:58:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.597 17:58:40 -- common/autotest_common.sh@10 -- # set +x 00:12:23.597 17:58:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.597 17:58:40 -- bdev/blockdev.sh@738 -- # cat 00:12:23.597 17:58:40 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:12:23.597 17:58:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.597 17:58:40 -- common/autotest_common.sh@10 -- # set +x 00:12:23.597 17:58:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.597 17:58:40 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:12:23.597 17:58:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.597 17:58:40 -- common/autotest_common.sh@10 -- # set +x 00:12:23.597 17:58:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.597 17:58:40 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:23.597 17:58:40 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.597 17:58:40 -- common/autotest_common.sh@10 -- # set +x 00:12:23.597 17:58:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.597 17:58:40 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:12:23.597 17:58:40 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:12:23.597 17:58:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.597 17:58:40 -- common/autotest_common.sh@10 -- # set +x 00:12:23.597 17:58:40 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:12:23.597 17:58:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.857 17:58:40 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:12:23.857 17:58:40 -- bdev/blockdev.sh@747 -- # jq -r .name 00:12:23.858 17:58:40 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "652e91e3-8209-472a-8b18-9223bb92ab6d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "652e91e3-8209-472a-8b18-9223bb92ab6d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "9eaf6466-cc97-4502-bb63-1d38ab56e4c9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9eaf6466-cc97-4502-bb63-1d38ab56e4c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "85f13663-a943-4209-8abf-885ea8394318"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "85f13663-a943-4209-8abf-885ea8394318",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "95eb5822-0cd0-4dcc-b426-c88150579c27"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "95eb5822-0cd0-4dcc-b426-c88150579c27",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": 
false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "2cd9ff7b-31f1-49d9-b4cd-7911228a26ee"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2cd9ff7b-31f1-49d9-b4cd-7911228a26ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "cde6f43e-4c01-461c-aff5-dacc76cf55e4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "cde6f43e-4c01-461c-aff5-dacc76cf55e4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:12:23.858 17:58:40 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:12:23.858 17:58:40 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:12:23.858 17:58:40 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:12:23.858 17:58:40 -- bdev/blockdev.sh@752 -- # killprocess 78982 00:12:23.858 17:58:40 -- common/autotest_common.sh@936 -- # '[' -z 78982 ']' 00:12:23.858 17:58:40 -- common/autotest_common.sh@940 -- # kill -0 78982 00:12:23.858 17:58:40 -- common/autotest_common.sh@941 -- # uname 00:12:23.858 17:58:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:23.858 17:58:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78982 00:12:23.858 17:58:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:23.858 17:58:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:23.858 killing process with pid 78982 00:12:23.858 17:58:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78982' 00:12:23.858 17:58:40 -- common/autotest_common.sh@955 -- # kill 78982 00:12:23.858 17:58:40 -- common/autotest_common.sh@960 -- # wait 78982 00:12:24.117 17:58:40 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:24.117 17:58:40 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:24.117 17:58:40 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:24.117 17:58:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:24.117 17:58:40 -- common/autotest_common.sh@10 -- # set +x 00:12:24.117 ************************************ 00:12:24.117 START TEST bdev_hello_world 00:12:24.117 ************************************ 00:12:24.117 17:58:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:24.375 [2024-11-26 17:58:41.066098] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
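  # hello_bdev reads its targets from bdev.json, which the setup phase generated from
  # the bdev_xnvme_create calls above via save_subsystem_config. Its shape presumably
  # mirrors the subsystems format printed earlier in this log — an assumed, trimmed
  # reconstruction for one device only:
  { "subsystems": [ { "subsystem": "bdev", "config": [
    { "method": "bdev_xnvme_create",
      "params": { "name": "nvme0n1", "filename": "/dev/nvme0n1", "io_mechanism": "io_uring" } }
  ] } ] }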
00:12:24.375 [2024-11-26 17:58:41.066227] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79352 ] 00:12:24.375 [2024-11-26 17:58:41.219375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.375 [2024-11-26 17:58:41.263160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.635 [2024-11-26 17:58:41.441007] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:24.635 [2024-11-26 17:58:41.441063] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:24.635 [2024-11-26 17:58:41.441086] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:24.635 [2024-11-26 17:58:41.443233] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:24.635 [2024-11-26 17:58:41.443604] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:24.635 [2024-11-26 17:58:41.443630] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:24.635 [2024-11-26 17:58:41.443861] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:12:24.635 00:12:24.635 [2024-11-26 17:58:41.443880] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:24.894 00:12:24.894 real 0m0.675s 00:12:24.894 user 0m0.375s 00:12:24.894 sys 0m0.191s 00:12:24.894 17:58:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:24.894 17:58:41 -- common/autotest_common.sh@10 -- # set +x 00:12:24.894 ************************************ 00:12:24.894 END TEST bdev_hello_world 00:12:24.894 ************************************ 00:12:24.894 17:58:41 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:12:24.894 17:58:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:24.894 17:58:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:24.894 17:58:41 -- common/autotest_common.sh@10 -- # set +x 00:12:24.894 ************************************ 00:12:24.894 START TEST bdev_bounds 00:12:24.894 ************************************ 00:12:24.894 17:58:41 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:12:24.894 17:58:41 -- bdev/blockdev.sh@288 -- # bdevio_pid=79383 00:12:24.894 17:58:41 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:24.894 17:58:41 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:24.894 Process bdevio pid: 79383 00:12:24.894 17:58:41 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 79383' 00:12:24.894 17:58:41 -- bdev/blockdev.sh@291 -- # waitforlisten 79383 00:12:24.894 17:58:41 -- common/autotest_common.sh@829 -- # '[' -z 79383 ']' 00:12:24.894 17:58:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.894 17:58:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:24.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.894 17:58:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
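  # The bounds test that follows uses a two-process pattern: bdevio is started with -w
  # (presumably to park after init and wait for an RPC) and a separate driver script
  # fires the actual cases over the socket. Condensed from the commands in this run:
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
  bdevio_pid=$!
  waitforlisten "$bdevio_pid"
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests   # drives the CUnit suites below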
00:12:24.894 17:58:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:24.894 17:58:41 -- common/autotest_common.sh@10 -- # set +x 00:12:25.154 [2024-11-26 17:58:41.817639] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:25.154 [2024-11-26 17:58:41.817790] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79383 ] 00:12:25.154 [2024-11-26 17:58:41.970303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:25.154 [2024-11-26 17:58:42.012496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.154 [2024-11-26 17:58:42.012522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.154 [2024-11-26 17:58:42.012635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.723 17:58:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:25.723 17:58:42 -- common/autotest_common.sh@862 -- # return 0 00:12:25.723 17:58:42 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:25.982 I/O targets: 00:12:25.982 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:25.982 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:25.982 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:25.982 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:25.982 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:25.982 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:25.982 00:12:25.982 00:12:25.982 CUnit - A unit testing framework for C - Version 2.1-3 00:12:25.982 http://cunit.sourceforge.net/ 00:12:25.982 00:12:25.982 00:12:25.982 Suite: bdevio tests on: nvme3n1 00:12:25.982 Test: blockdev write read block ...passed 00:12:25.982 Test: blockdev write zeroes read block ...passed 00:12:25.982 Test: blockdev write zeroes read no split ...passed 00:12:25.982 Test: blockdev write zeroes read split ...passed 00:12:25.982 Test: blockdev write zeroes read split partial ...passed 00:12:25.982 Test: blockdev reset ...passed 00:12:25.982 Test: blockdev write read 8 blocks ...passed 00:12:25.982 Test: blockdev write read size > 128k ...passed 00:12:25.982 Test: blockdev write read invalid size ...passed 00:12:25.982 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:25.982 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:25.982 Test: blockdev write read max offset ...passed 00:12:25.982 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:25.982 Test: blockdev writev readv 8 blocks ...passed 00:12:25.982 Test: blockdev writev readv 30 x 1block ...passed 00:12:25.982 Test: blockdev writev readv block ...passed 00:12:25.982 Test: blockdev writev readv size > 128k ...passed 00:12:25.982 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:25.982 Test: blockdev comparev and writev ...passed 00:12:25.982 Test: blockdev nvme passthru rw ...passed 00:12:25.982 Test: blockdev nvme passthru vendor specific ...passed 00:12:25.982 Test: blockdev nvme admin passthru ...passed 00:12:25.982 Test: blockdev copy ...passed 00:12:25.982 Suite: bdevio tests on: nvme2n1 00:12:25.982 Test: blockdev write read block ...passed 00:12:25.982 Test: blockdev write zeroes read block ...passed 00:12:25.982 Test: blockdev write zeroes read no split ...passed 00:12:25.982 Test: blockdev 
write zeroes read split ...passed 00:12:25.982 Test: blockdev write zeroes read split partial ...passed 00:12:25.982 Test: blockdev reset ...passed 00:12:25.982 Test: blockdev write read 8 blocks ...passed 00:12:25.982 Test: blockdev write read size > 128k ...passed 00:12:25.982 Test: blockdev write read invalid size ...passed 00:12:25.982 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:25.982 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:25.982 Test: blockdev write read max offset ...passed 00:12:25.982 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:25.982 Test: blockdev writev readv 8 blocks ...passed 00:12:25.982 Test: blockdev writev readv 30 x 1block ...passed 00:12:25.982 Test: blockdev writev readv block ...passed 00:12:25.982 Test: blockdev writev readv size > 128k ...passed 00:12:25.982 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:25.982 Test: blockdev comparev and writev ...passed 00:12:25.982 Test: blockdev nvme passthru rw ...passed 00:12:25.982 Test: blockdev nvme passthru vendor specific ...passed 00:12:25.982 Test: blockdev nvme admin passthru ...passed 00:12:25.982 Test: blockdev copy ...passed 00:12:25.982 Suite: bdevio tests on: nvme1n3 00:12:25.982 Test: blockdev write read block ...passed 00:12:25.982 Test: blockdev write zeroes read block ...passed 00:12:25.982 Test: blockdev write zeroes read no split ...passed 00:12:25.982 Test: blockdev write zeroes read split ...passed 00:12:25.982 Test: blockdev write zeroes read split partial ...passed 00:12:25.982 Test: blockdev reset ...passed 00:12:25.982 Test: blockdev write read 8 blocks ...passed 00:12:25.982 Test: blockdev write read size > 128k ...passed 00:12:25.982 Test: blockdev write read invalid size ...passed 00:12:25.982 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:25.982 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:25.982 Test: blockdev write read max offset ...passed 00:12:25.982 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:25.982 Test: blockdev writev readv 8 blocks ...passed 00:12:25.982 Test: blockdev writev readv 30 x 1block ...passed 00:12:25.982 Test: blockdev writev readv block ...passed 00:12:25.982 Test: blockdev writev readv size > 128k ...passed 00:12:25.982 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:25.982 Test: blockdev comparev and writev ...passed 00:12:25.982 Test: blockdev nvme passthru rw ...passed 00:12:25.982 Test: blockdev nvme passthru vendor specific ...passed 00:12:25.982 Test: blockdev nvme admin passthru ...passed 00:12:25.982 Test: blockdev copy ...passed 00:12:25.982 Suite: bdevio tests on: nvme1n2 00:12:25.982 Test: blockdev write read block ...passed 00:12:25.982 Test: blockdev write zeroes read block ...passed 00:12:25.982 Test: blockdev write zeroes read no split ...passed 00:12:25.982 Test: blockdev write zeroes read split ...passed 00:12:25.983 Test: blockdev write zeroes read split partial ...passed 00:12:25.983 Test: blockdev reset ...passed 00:12:25.983 Test: blockdev write read 8 blocks ...passed 00:12:25.983 Test: blockdev write read size > 128k ...passed 00:12:25.983 Test: blockdev write read invalid size ...passed 00:12:25.983 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:25.983 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:25.983 Test: blockdev write read max offset 
...passed 00:12:25.983 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:25.983 Test: blockdev writev readv 8 blocks ...passed 00:12:25.983 Test: blockdev writev readv 30 x 1block ...passed 00:12:25.983 Test: blockdev writev readv block ...passed 00:12:25.983 Test: blockdev writev readv size > 128k ...passed 00:12:25.983 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:25.983 Test: blockdev comparev and writev ...passed 00:12:25.983 Test: blockdev nvme passthru rw ...passed 00:12:25.983 Test: blockdev nvme passthru vendor specific ...passed 00:12:25.983 Test: blockdev nvme admin passthru ...passed 00:12:25.983 Test: blockdev copy ...passed 00:12:25.983 Suite: bdevio tests on: nvme1n1 00:12:25.983 Test: blockdev write read block ...passed 00:12:25.983 Test: blockdev write zeroes read block ...passed 00:12:25.983 Test: blockdev write zeroes read no split ...passed 00:12:25.983 Test: blockdev write zeroes read split ...passed 00:12:25.983 Test: blockdev write zeroes read split partial ...passed 00:12:25.983 Test: blockdev reset ...passed 00:12:25.983 Test: blockdev write read 8 blocks ...passed 00:12:25.983 Test: blockdev write read size > 128k ...passed 00:12:25.983 Test: blockdev write read invalid size ...passed 00:12:25.983 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:25.983 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:25.983 Test: blockdev write read max offset ...passed 00:12:25.983 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:25.983 Test: blockdev writev readv 8 blocks ...passed 00:12:25.983 Test: blockdev writev readv 30 x 1block ...passed 00:12:25.983 Test: blockdev writev readv block ...passed 00:12:25.983 Test: blockdev writev readv size > 128k ...passed 00:12:25.983 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:25.983 Test: blockdev comparev and writev ...passed 00:12:25.983 Test: blockdev nvme passthru rw ...passed 00:12:25.983 Test: blockdev nvme passthru vendor specific ...passed 00:12:25.983 Test: blockdev nvme admin passthru ...passed 00:12:25.983 Test: blockdev copy ...passed 00:12:25.983 Suite: bdevio tests on: nvme0n1 00:12:25.983 Test: blockdev write read block ...passed 00:12:25.983 Test: blockdev write zeroes read block ...passed 00:12:25.983 Test: blockdev write zeroes read no split ...passed 00:12:25.983 Test: blockdev write zeroes read split ...passed 00:12:25.983 Test: blockdev write zeroes read split partial ...passed 00:12:25.983 Test: blockdev reset ...passed 00:12:25.983 Test: blockdev write read 8 blocks ...passed 00:12:25.983 Test: blockdev write read size > 128k ...passed 00:12:25.983 Test: blockdev write read invalid size ...passed 00:12:25.983 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:25.983 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:25.983 Test: blockdev write read max offset ...passed 00:12:25.983 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:25.983 Test: blockdev writev readv 8 blocks ...passed 00:12:25.983 Test: blockdev writev readv 30 x 1block ...passed 00:12:25.983 Test: blockdev writev readv block ...passed 00:12:25.983 Test: blockdev writev readv size > 128k ...passed 00:12:25.983 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:25.983 Test: blockdev comparev and writev ...passed 00:12:25.983 Test: blockdev nvme passthru rw ...passed 00:12:25.983 Test: 
blockdev nvme passthru vendor specific ...passed 00:12:25.983 Test: blockdev nvme admin passthru ...passed 00:12:25.983 Test: blockdev copy ...passed 00:12:25.983 00:12:25.983 Run Summary: Type Total Ran Passed Failed Inactive 00:12:25.983 suites 6 6 n/a 0 0 00:12:25.983 tests 138 138 138 0 0 00:12:25.983 asserts 780 780 780 0 n/a 00:12:25.983 00:12:25.983 Elapsed time = 0.368 seconds 00:12:25.983 0 00:12:25.983 17:58:42 -- bdev/blockdev.sh@293 -- # killprocess 79383 00:12:25.983 17:58:42 -- common/autotest_common.sh@936 -- # '[' -z 79383 ']' 00:12:25.983 17:58:42 -- common/autotest_common.sh@940 -- # kill -0 79383 00:12:26.285 17:58:42 -- common/autotest_common.sh@941 -- # uname 00:12:26.285 17:58:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:26.285 17:58:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79383 00:12:26.285 17:58:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:26.285 17:58:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:26.285 killing process with pid 79383 00:12:26.285 17:58:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79383' 00:12:26.285 17:58:42 -- common/autotest_common.sh@955 -- # kill 79383 00:12:26.285 17:58:42 -- common/autotest_common.sh@960 -- # wait 79383 00:12:26.285 17:58:43 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:12:26.285 00:12:26.285 real 0m1.439s 00:12:26.285 user 0m3.451s 00:12:26.285 sys 0m0.387s 00:12:26.285 17:58:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:26.285 17:58:43 -- common/autotest_common.sh@10 -- # set +x 00:12:26.285 ************************************ 00:12:26.285 END TEST bdev_bounds 00:12:26.285 ************************************ 00:12:26.545 17:58:43 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:12:26.545 17:58:43 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:12:26.545 17:58:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:26.545 17:58:43 -- common/autotest_common.sh@10 -- # set +x 00:12:26.545 ************************************ 00:12:26.545 START TEST bdev_nbd 00:12:26.545 ************************************ 00:12:26.545 17:58:43 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:12:26.545 17:58:43 -- bdev/blockdev.sh@298 -- # uname -s 00:12:26.545 17:58:43 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:12:26.545 17:58:43 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:26.545 17:58:43 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:26.545 17:58:43 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:26.545 17:58:43 -- bdev/blockdev.sh@302 -- # local bdev_all 00:12:26.545 17:58:43 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:12:26.545 17:58:43 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:12:26.545 17:58:43 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:26.545 17:58:43 -- bdev/blockdev.sh@309 -- # local nbd_all 00:12:26.545 17:58:43 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:12:26.545 
17:58:43 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:26.545 17:58:43 -- bdev/blockdev.sh@312 -- # local nbd_list 00:12:26.545 17:58:43 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:26.545 17:58:43 -- bdev/blockdev.sh@313 -- # local bdev_list 00:12:26.545 17:58:43 -- bdev/blockdev.sh@316 -- # nbd_pid=79427 00:12:26.545 17:58:43 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:26.545 17:58:43 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:26.545 17:58:43 -- bdev/blockdev.sh@318 -- # waitforlisten 79427 /var/tmp/spdk-nbd.sock 00:12:26.545 17:58:43 -- common/autotest_common.sh@829 -- # '[' -z 79427 ']' 00:12:26.545 17:58:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:26.545 17:58:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:26.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:26.545 17:58:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:26.545 17:58:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:26.545 17:58:43 -- common/autotest_common.sh@10 -- # set +x 00:12:26.545 [2024-11-26 17:58:43.339404] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:26.545 [2024-11-26 17:58:43.339551] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.804 [2024-11-26 17:58:43.480359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.804 [2024-11-26 17:58:43.521559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.373 17:58:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:27.373 17:58:44 -- common/autotest_common.sh@862 -- # return 0 00:12:27.373 17:58:44 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:12:27.373 17:58:44 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:27.373 17:58:44 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:27.373 17:58:44 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:27.373 17:58:44 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:12:27.373 17:58:44 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:27.373 17:58:44 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:27.373 17:58:44 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:27.373 17:58:44 -- bdev/nbd_common.sh@24 -- # local i 00:12:27.373 17:58:44 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:27.373 17:58:44 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:27.373 17:58:44 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:27.373 17:58:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:27.632 17:58:44 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:27.632 17:58:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:27.632 17:58:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:27.632 17:58:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:27.632 17:58:44 -- common/autotest_common.sh@867 -- # local i 00:12:27.632 17:58:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:27.632 17:58:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:27.632 17:58:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:27.632 17:58:44 -- common/autotest_common.sh@871 -- # break 00:12:27.632 17:58:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:27.632 17:58:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:27.632 17:58:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:27.632 1+0 records in 00:12:27.632 1+0 records out 00:12:27.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000639141 s, 6.4 MB/s 00:12:27.632 17:58:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.632 17:58:44 -- common/autotest_common.sh@884 -- # size=4096 00:12:27.632 17:58:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.632 17:58:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:27.632 17:58:44 -- common/autotest_common.sh@887 -- # return 0 00:12:27.632 17:58:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:27.632 17:58:44 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:27.632 17:58:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:27.892 17:58:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:27.892 17:58:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:27.892 17:58:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:27.892 17:58:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:27.892 17:58:44 -- common/autotest_common.sh@867 -- # local i 00:12:27.892 17:58:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:27.892 17:58:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:27.892 17:58:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:27.892 17:58:44 -- common/autotest_common.sh@871 -- # break 00:12:27.892 17:58:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:27.892 17:58:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:27.892 17:58:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:27.892 1+0 records in 00:12:27.892 1+0 records out 00:12:27.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000540789 s, 7.6 MB/s 00:12:27.892 17:58:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.892 17:58:44 -- common/autotest_common.sh@884 -- # size=4096 00:12:27.892 17:58:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.892 17:58:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:27.892 17:58:44 -- common/autotest_common.sh@887 -- # return 0 00:12:27.892 17:58:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:27.892 17:58:44 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:27.892 17:58:44 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:12:27.892 17:58:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:27.892 17:58:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:28.151 17:58:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:28.151 17:58:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:28.151 17:58:44 -- common/autotest_common.sh@867 -- # local i 00:12:28.151 17:58:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:28.151 17:58:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:28.151 17:58:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:28.151 17:58:44 -- common/autotest_common.sh@871 -- # break 00:12:28.151 17:58:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:28.151 17:58:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:28.151 17:58:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.151 1+0 records in 00:12:28.151 1+0 records out 00:12:28.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000731203 s, 5.6 MB/s 00:12:28.151 17:58:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.151 17:58:44 -- common/autotest_common.sh@884 -- # size=4096 00:12:28.151 17:58:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.151 17:58:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:28.151 17:58:44 -- common/autotest_common.sh@887 -- # return 0 00:12:28.151 17:58:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:28.151 17:58:44 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:28.151 17:58:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:12:28.151 17:58:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:28.151 17:58:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:28.151 17:58:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:28.151 17:58:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:28.151 17:58:45 -- common/autotest_common.sh@867 -- # local i 00:12:28.151 17:58:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:28.151 17:58:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:28.151 17:58:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:12:28.151 17:58:45 -- common/autotest_common.sh@871 -- # break 00:12:28.151 17:58:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:28.151 17:58:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:28.151 17:58:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.151 1+0 records in 00:12:28.151 1+0 records out 00:12:28.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00074545 s, 5.5 MB/s 00:12:28.151 17:58:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.151 17:58:45 -- common/autotest_common.sh@884 -- # size=4096 00:12:28.151 17:58:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.410 17:58:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:28.410 17:58:45 -- common/autotest_common.sh@887 -- # return 0 00:12:28.410 17:58:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:28.410 17:58:45 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:28.410 17:58:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:28.410 17:58:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:28.410 17:58:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:28.410 17:58:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:28.410 17:58:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:28.410 17:58:45 -- common/autotest_common.sh@867 -- # local i 00:12:28.410 17:58:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:28.410 17:58:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:28.410 17:58:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:12:28.410 17:58:45 -- common/autotest_common.sh@871 -- # break 00:12:28.410 17:58:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:28.411 17:58:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:28.411 17:58:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.411 1+0 records in 00:12:28.411 1+0 records out 00:12:28.411 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000725196 s, 5.6 MB/s 00:12:28.411 17:58:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.411 17:58:45 -- common/autotest_common.sh@884 -- # size=4096 00:12:28.411 17:58:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.411 17:58:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:28.411 17:58:45 -- common/autotest_common.sh@887 -- # return 0 00:12:28.411 17:58:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:28.411 17:58:45 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:28.411 17:58:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:28.670 17:58:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:28.670 17:58:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:28.670 17:58:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:28.670 17:58:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:28.670 17:58:45 -- common/autotest_common.sh@867 -- # local i 00:12:28.670 17:58:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:28.670 17:58:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:28.670 17:58:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:28.670 17:58:45 -- common/autotest_common.sh@871 -- # break 00:12:28.670 17:58:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:28.670 17:58:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:28.670 17:58:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.670 1+0 records in 00:12:28.670 1+0 records out 00:12:28.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000707445 s, 5.8 MB/s 00:12:28.670 17:58:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.670 17:58:45 -- common/autotest_common.sh@884 -- # size=4096 00:12:28.670 17:58:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.670 17:58:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:28.670 17:58:45 -- common/autotest_common.sh@887 -- # return 0 
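
[Annotator's note: sketch, not from the log.] Each of the six nbd_start_disk calls above is followed by the same verification pattern: export the bdev over NBD, poll /proc/partitions until the kernel device node appears (waitfornbd retries up to 20 times), then prove the device is readable with a single 4 KiB direct-I/O read. Condensed into plain shell, with the probe output path chosen here purely for illustration:

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    $RPC nbd_start_disk nvme0n1 /dev/nbd0                         # export the bdev as an NBD block device
    until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done    # wait for the kernel to register it
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct  # one-block direct read probe
    $RPC nbd_stop_disk /dev/nbd0                                  # detach when done

The nbd_get_disks and nbd_stop_disk traffic that follows tears all six mappings down again and confirms, via the empty JSON array returned by nbd_get_disks, that no NBD devices remain attached before the data-verify pass re-exports them on /dev/nbd0, /dev/nbd1, and /dev/nbd10 through /dev/nbd13.
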
00:12:28.670 17:58:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:28.670 17:58:45 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:28.670 17:58:45 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:28.929 17:58:45 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:28.929 { 00:12:28.929 "nbd_device": "/dev/nbd0", 00:12:28.929 "bdev_name": "nvme0n1" 00:12:28.929 }, 00:12:28.929 { 00:12:28.929 "nbd_device": "/dev/nbd1", 00:12:28.929 "bdev_name": "nvme1n1" 00:12:28.929 }, 00:12:28.929 { 00:12:28.929 "nbd_device": "/dev/nbd2", 00:12:28.929 "bdev_name": "nvme1n2" 00:12:28.929 }, 00:12:28.929 { 00:12:28.929 "nbd_device": "/dev/nbd3", 00:12:28.929 "bdev_name": "nvme1n3" 00:12:28.929 }, 00:12:28.929 { 00:12:28.929 "nbd_device": "/dev/nbd4", 00:12:28.929 "bdev_name": "nvme2n1" 00:12:28.929 }, 00:12:28.929 { 00:12:28.929 "nbd_device": "/dev/nbd5", 00:12:28.929 "bdev_name": "nvme3n1" 00:12:28.929 } 00:12:28.929 ]' 00:12:28.929 17:58:45 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:28.929 17:58:45 -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:28.929 { 00:12:28.929 "nbd_device": "/dev/nbd0", 00:12:28.929 "bdev_name": "nvme0n1" 00:12:28.929 }, 00:12:28.929 { 00:12:28.929 "nbd_device": "/dev/nbd1", 00:12:28.929 "bdev_name": "nvme1n1" 00:12:28.929 }, 00:12:28.929 { 00:12:28.929 "nbd_device": "/dev/nbd2", 00:12:28.929 "bdev_name": "nvme1n2" 00:12:28.929 }, 00:12:28.929 { 00:12:28.929 "nbd_device": "/dev/nbd3", 00:12:28.929 "bdev_name": "nvme1n3" 00:12:28.929 }, 00:12:28.929 { 00:12:28.929 "nbd_device": "/dev/nbd4", 00:12:28.929 "bdev_name": "nvme2n1" 00:12:28.929 }, 00:12:28.929 { 00:12:28.929 "nbd_device": "/dev/nbd5", 00:12:28.929 "bdev_name": "nvme3n1" 00:12:28.929 } 00:12:28.929 ]' 00:12:28.929 17:58:45 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:28.929 17:58:45 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:28.929 17:58:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:28.929 17:58:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:28.929 17:58:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:28.929 17:58:45 -- bdev/nbd_common.sh@51 -- # local i 00:12:28.929 17:58:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:28.930 17:58:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:29.189 17:58:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:29.189 17:58:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:29.189 17:58:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:29.189 17:58:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.189 17:58:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.189 17:58:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:29.189 17:58:45 -- bdev/nbd_common.sh@41 -- # break 00:12:29.189 17:58:45 -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.189 17:58:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.189 17:58:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:29.447 17:58:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:29.447 17:58:46 -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:12:29.447 17:58:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:29.447 17:58:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.447 17:58:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.447 17:58:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:29.447 17:58:46 -- bdev/nbd_common.sh@41 -- # break 00:12:29.447 17:58:46 -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.448 17:58:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.448 17:58:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:29.706 17:58:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:29.706 17:58:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:29.706 17:58:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:29.706 17:58:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.706 17:58:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.706 17:58:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:29.706 17:58:46 -- bdev/nbd_common.sh@41 -- # break 00:12:29.706 17:58:46 -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.706 17:58:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.706 17:58:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:29.706 17:58:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:29.706 17:58:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:29.706 17:58:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:29.706 17:58:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.706 17:58:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.706 17:58:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:29.706 17:58:46 -- bdev/nbd_common.sh@41 -- # break 00:12:29.706 17:58:46 -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.706 17:58:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.706 17:58:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:29.965 17:58:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:29.965 17:58:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:29.965 17:58:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:29.965 17:58:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.965 17:58:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.965 17:58:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:29.965 17:58:46 -- bdev/nbd_common.sh@41 -- # break 00:12:29.965 17:58:46 -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.965 17:58:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.965 17:58:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:30.223 17:58:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:30.223 17:58:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:30.223 17:58:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:30.223 17:58:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.223 17:58:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.223 17:58:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:30.223 17:58:47 -- bdev/nbd_common.sh@41 -- # break 00:12:30.223 17:58:47 -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.223 17:58:47 -- 
bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:30.223 17:58:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:30.223 17:58:47 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@65 -- # true 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@65 -- # count=0 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@122 -- # count=0 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@127 -- # return 0 00:12:30.483 17:58:47 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@12 -- # local i 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:30.483 17:58:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:30.743 /dev/nbd0 00:12:30.743 17:58:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:30.743 17:58:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:30.743 17:58:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:30.743 17:58:47 -- common/autotest_common.sh@867 -- # local i 00:12:30.743 17:58:47 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:30.743 17:58:47 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:30.743 17:58:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:30.743 17:58:47 -- common/autotest_common.sh@871 -- # break 00:12:30.743 17:58:47 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:30.743 17:58:47 -- common/autotest_common.sh@882 -- 
# (( i <= 20 )) 00:12:30.743 17:58:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:30.743 1+0 records in 00:12:30.743 1+0 records out 00:12:30.743 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000813844 s, 5.0 MB/s 00:12:30.743 17:58:47 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.743 17:58:47 -- common/autotest_common.sh@884 -- # size=4096 00:12:30.743 17:58:47 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.743 17:58:47 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:30.743 17:58:47 -- common/autotest_common.sh@887 -- # return 0 00:12:30.743 17:58:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:30.743 17:58:47 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:30.743 17:58:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:31.002 /dev/nbd1 00:12:31.002 17:58:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:31.002 17:58:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:31.002 17:58:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:31.002 17:58:47 -- common/autotest_common.sh@867 -- # local i 00:12:31.002 17:58:47 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:31.002 17:58:47 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:31.002 17:58:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:31.002 17:58:47 -- common/autotest_common.sh@871 -- # break 00:12:31.002 17:58:47 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:31.002 17:58:47 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:31.002 17:58:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.002 1+0 records in 00:12:31.002 1+0 records out 00:12:31.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602021 s, 6.8 MB/s 00:12:31.002 17:58:47 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.002 17:58:47 -- common/autotest_common.sh@884 -- # size=4096 00:12:31.002 17:58:47 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.002 17:58:47 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:31.002 17:58:47 -- common/autotest_common.sh@887 -- # return 0 00:12:31.002 17:58:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:31.002 17:58:47 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:31.002 17:58:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:12:31.288 /dev/nbd10 00:12:31.288 17:58:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:31.288 17:58:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:31.288 17:58:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:31.288 17:58:47 -- common/autotest_common.sh@867 -- # local i 00:12:31.288 17:58:47 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:31.288 17:58:47 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:31.288 17:58:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:31.288 17:58:47 -- common/autotest_common.sh@871 -- # break 00:12:31.288 17:58:47 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:31.288 17:58:47 -- 
common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:31.288 17:58:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.288 1+0 records in 00:12:31.288 1+0 records out 00:12:31.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582096 s, 7.0 MB/s 00:12:31.288 17:58:47 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.288 17:58:47 -- common/autotest_common.sh@884 -- # size=4096 00:12:31.288 17:58:47 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.288 17:58:47 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:31.288 17:58:47 -- common/autotest_common.sh@887 -- # return 0 00:12:31.288 17:58:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:31.288 17:58:47 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:31.288 17:58:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:12:31.288 /dev/nbd11 00:12:31.288 17:58:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:31.288 17:58:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:31.288 17:58:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:12:31.288 17:58:48 -- common/autotest_common.sh@867 -- # local i 00:12:31.288 17:58:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:31.288 17:58:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:31.288 17:58:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:31.288 17:58:48 -- common/autotest_common.sh@871 -- # break 00:12:31.288 17:58:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:31.288 17:58:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:31.288 17:58:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.288 1+0 records in 00:12:31.288 1+0 records out 00:12:31.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00276617 s, 1.5 MB/s 00:12:31.288 17:58:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.548 17:58:48 -- common/autotest_common.sh@884 -- # size=4096 00:12:31.548 17:58:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.548 17:58:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:31.548 17:58:48 -- common/autotest_common.sh@887 -- # return 0 00:12:31.548 17:58:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:31.548 17:58:48 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:31.548 17:58:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:12:31.548 /dev/nbd12 00:12:31.548 17:58:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:31.548 17:58:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:31.548 17:58:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:31.548 17:58:48 -- common/autotest_common.sh@867 -- # local i 00:12:31.548 17:58:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:31.548 17:58:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:31.548 17:58:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:12:31.548 17:58:48 -- common/autotest_common.sh@871 -- # break 00:12:31.548 17:58:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 
00:12:31.548 17:58:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:31.548 17:58:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.548 1+0 records in 00:12:31.548 1+0 records out 00:12:31.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00099406 s, 4.1 MB/s 00:12:31.548 17:58:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.548 17:58:48 -- common/autotest_common.sh@884 -- # size=4096 00:12:31.548 17:58:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.548 17:58:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:31.548 17:58:48 -- common/autotest_common.sh@887 -- # return 0 00:12:31.548 17:58:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:31.548 17:58:48 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:31.548 17:58:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:31.809 /dev/nbd13 00:12:31.809 17:58:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:31.809 17:58:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:31.809 17:58:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:31.809 17:58:48 -- common/autotest_common.sh@867 -- # local i 00:12:31.809 17:58:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:31.809 17:58:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:31.809 17:58:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:12:31.809 17:58:48 -- common/autotest_common.sh@871 -- # break 00:12:31.809 17:58:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:31.809 17:58:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:31.809 17:58:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.809 1+0 records in 00:12:31.809 1+0 records out 00:12:31.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0041302 s, 992 kB/s 00:12:31.809 17:58:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.809 17:58:48 -- common/autotest_common.sh@884 -- # size=4096 00:12:31.809 17:58:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.809 17:58:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:31.809 17:58:48 -- common/autotest_common.sh@887 -- # return 0 00:12:31.809 17:58:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:31.809 17:58:48 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:31.809 17:58:48 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:31.809 17:58:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:31.809 17:58:48 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:32.068 17:58:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:32.068 { 00:12:32.068 "nbd_device": "/dev/nbd0", 00:12:32.068 "bdev_name": "nvme0n1" 00:12:32.068 }, 00:12:32.068 { 00:12:32.068 "nbd_device": "/dev/nbd1", 00:12:32.068 "bdev_name": "nvme1n1" 00:12:32.068 }, 00:12:32.068 { 00:12:32.068 "nbd_device": "/dev/nbd10", 00:12:32.068 "bdev_name": "nvme1n2" 00:12:32.068 }, 00:12:32.068 { 00:12:32.068 "nbd_device": "/dev/nbd11", 00:12:32.068 "bdev_name": "nvme1n3" 00:12:32.068 }, 00:12:32.068 { 
00:12:32.068 "nbd_device": "/dev/nbd12", 00:12:32.068 "bdev_name": "nvme2n1" 00:12:32.068 }, 00:12:32.068 { 00:12:32.068 "nbd_device": "/dev/nbd13", 00:12:32.068 "bdev_name": "nvme3n1" 00:12:32.068 } 00:12:32.068 ]' 00:12:32.068 17:58:48 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:32.068 { 00:12:32.068 "nbd_device": "/dev/nbd0", 00:12:32.068 "bdev_name": "nvme0n1" 00:12:32.068 }, 00:12:32.068 { 00:12:32.068 "nbd_device": "/dev/nbd1", 00:12:32.068 "bdev_name": "nvme1n1" 00:12:32.068 }, 00:12:32.068 { 00:12:32.068 "nbd_device": "/dev/nbd10", 00:12:32.068 "bdev_name": "nvme1n2" 00:12:32.068 }, 00:12:32.068 { 00:12:32.068 "nbd_device": "/dev/nbd11", 00:12:32.068 "bdev_name": "nvme1n3" 00:12:32.068 }, 00:12:32.068 { 00:12:32.068 "nbd_device": "/dev/nbd12", 00:12:32.068 "bdev_name": "nvme2n1" 00:12:32.068 }, 00:12:32.068 { 00:12:32.068 "nbd_device": "/dev/nbd13", 00:12:32.068 "bdev_name": "nvme3n1" 00:12:32.068 } 00:12:32.068 ]' 00:12:32.068 17:58:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:32.068 17:58:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:32.068 /dev/nbd1 00:12:32.068 /dev/nbd10 00:12:32.068 /dev/nbd11 00:12:32.068 /dev/nbd12 00:12:32.068 /dev/nbd13' 00:12:32.068 17:58:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:32.068 17:58:48 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:32.068 /dev/nbd1 00:12:32.068 /dev/nbd10 00:12:32.068 /dev/nbd11 00:12:32.068 /dev/nbd12 00:12:32.068 /dev/nbd13' 00:12:32.068 17:58:48 -- bdev/nbd_common.sh@65 -- # count=6 00:12:32.068 17:58:48 -- bdev/nbd_common.sh@66 -- # echo 6 00:12:32.068 17:58:48 -- bdev/nbd_common.sh@95 -- # count=6 00:12:32.068 17:58:48 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:32.068 17:58:48 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:32.068 17:58:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:32.068 17:58:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:32.068 17:58:48 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:32.068 17:58:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:32.068 17:58:48 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:32.068 17:58:48 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:32.068 256+0 records in 00:12:32.068 256+0 records out 00:12:32.068 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122032 s, 85.9 MB/s 00:12:32.068 17:58:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:32.068 17:58:48 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:32.327 256+0 records in 00:12:32.327 256+0 records out 00:12:32.327 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124462 s, 8.4 MB/s 00:12:32.327 17:58:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:32.327 17:58:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:32.327 256+0 records in 00:12:32.327 256+0 records out 00:12:32.327 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123743 s, 8.5 MB/s 00:12:32.327 17:58:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:32.327 17:58:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 
count=256 oflag=direct 00:12:32.586 256+0 records in 00:12:32.586 256+0 records out 00:12:32.586 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125118 s, 8.4 MB/s 00:12:32.586 17:58:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:32.586 17:58:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:32.586 256+0 records in 00:12:32.586 256+0 records out 00:12:32.586 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126385 s, 8.3 MB/s 00:12:32.586 17:58:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:32.586 17:58:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:32.846 256+0 records in 00:12:32.846 256+0 records out 00:12:32.846 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145652 s, 7.2 MB/s 00:12:32.846 17:58:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:32.846 17:58:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:32.846 256+0 records in 00:12:32.846 256+0 records out 00:12:32.846 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129231 s, 8.1 MB/s 00:12:32.846 17:58:49 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:32.846 17:58:49 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:32.846 17:58:49 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:32.846 17:58:49 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:32.846 17:58:49 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:32.846 17:58:49 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:32.846 17:58:49 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:32.846 17:58:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:32.846 17:58:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:33.106 17:58:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:33.106 17:58:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:33.106 17:58:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:33.106 17:58:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:33.106 17:58:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:33.106 17:58:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:33.106 17:58:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:33.106 17:58:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:33.106 17:58:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:33.106 17:58:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:33.106 17:58:49 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:33.106 17:58:49 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:33.106 17:58:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:33.106 
17:58:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:33.106 17:58:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:33.106 17:58:49 -- bdev/nbd_common.sh@51 -- # local i 00:12:33.106 17:58:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.106 17:58:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:33.106 17:58:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:33.365 17:58:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:33.365 17:58:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:33.365 17:58:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.365 17:58:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.365 17:58:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:33.365 17:58:50 -- bdev/nbd_common.sh@41 -- # break 00:12:33.365 17:58:50 -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.365 17:58:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.365 17:58:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:33.365 17:58:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:33.365 17:58:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:33.365 17:58:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:33.365 17:58:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.365 17:58:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.365 17:58:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:33.365 17:58:50 -- bdev/nbd_common.sh@41 -- # break 00:12:33.365 17:58:50 -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.365 17:58:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.365 17:58:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:33.625 17:58:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:33.625 17:58:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:33.625 17:58:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:33.625 17:58:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.625 17:58:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.625 17:58:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:33.625 17:58:50 -- bdev/nbd_common.sh@41 -- # break 00:12:33.625 17:58:50 -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.625 17:58:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.625 17:58:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:33.884 17:58:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:33.884 17:58:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:33.884 17:58:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:33.884 17:58:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.884 17:58:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.885 17:58:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:33.885 17:58:50 -- bdev/nbd_common.sh@41 -- # break 00:12:33.885 17:58:50 -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.885 17:58:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.885 17:58:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:34.144 17:58:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:34.144 17:58:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:34.144 17:58:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:34.144 17:58:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.144 17:58:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.144 17:58:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:34.144 17:58:50 -- bdev/nbd_common.sh@41 -- # break 00:12:34.144 17:58:50 -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.144 17:58:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.144 17:58:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:34.144 17:58:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:34.144 17:58:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:34.144 17:58:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:34.144 17:58:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.144 17:58:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.144 17:58:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:34.144 17:58:51 -- bdev/nbd_common.sh@41 -- # break 00:12:34.144 17:58:51 -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.144 17:58:51 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:34.144 17:58:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:34.144 17:58:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:34.402 17:58:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:34.402 17:58:51 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:34.402 17:58:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:34.402 17:58:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:34.402 17:58:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:34.402 17:58:51 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:34.402 17:58:51 -- bdev/nbd_common.sh@65 -- # true 00:12:34.402 17:58:51 -- bdev/nbd_common.sh@65 -- # count=0 00:12:34.402 17:58:51 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:34.402 17:58:51 -- bdev/nbd_common.sh@104 -- # count=0 00:12:34.402 17:58:51 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:34.402 17:58:51 -- bdev/nbd_common.sh@109 -- # return 0 00:12:34.402 17:58:51 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:34.402 17:58:51 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:34.402 17:58:51 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:34.402 17:58:51 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:34.402 17:58:51 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:34.402 17:58:51 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:34.660 malloc_lvol_verify 00:12:34.660 17:58:51 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:34.919 69f420e1-10ed-44ac-a8d7-397efa88875f 00:12:34.919 17:58:51 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
bdev_lvol_create lvol 4 -l lvs 00:12:35.177 e41241a6-987e-4d86-aa16-ea1a65391d8a 00:12:35.177 17:58:51 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:35.177 /dev/nbd0 00:12:35.177 17:58:52 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:35.177 mke2fs 1.47.0 (5-Feb-2023) 00:12:35.177 Discarding device blocks: 0/4096 done 00:12:35.177 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:35.177 00:12:35.177 Allocating group tables: 0/1 done 00:12:35.177 Writing inode tables: 0/1 done 00:12:35.177 Creating journal (1024 blocks): done 00:12:35.177 Writing superblocks and filesystem accounting information: 0/1 done 00:12:35.177 00:12:35.437 17:58:52 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:35.437 17:58:52 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:35.437 17:58:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.437 17:58:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:35.437 17:58:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:35.437 17:58:52 -- bdev/nbd_common.sh@51 -- # local i 00:12:35.437 17:58:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.437 17:58:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:35.437 17:58:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:35.437 17:58:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:35.437 17:58:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:35.437 17:58:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.437 17:58:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.437 17:58:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:35.437 17:58:52 -- bdev/nbd_common.sh@41 -- # break 00:12:35.437 17:58:52 -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.437 17:58:52 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:35.437 17:58:52 -- bdev/nbd_common.sh@147 -- # return 0 00:12:35.437 17:58:52 -- bdev/blockdev.sh@324 -- # killprocess 79427 00:12:35.437 17:58:52 -- common/autotest_common.sh@936 -- # '[' -z 79427 ']' 00:12:35.437 17:58:52 -- common/autotest_common.sh@940 -- # kill -0 79427 00:12:35.437 17:58:52 -- common/autotest_common.sh@941 -- # uname 00:12:35.437 17:58:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:35.437 17:58:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79427 00:12:35.697 17:58:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:35.697 killing process with pid 79427 00:12:35.697 17:58:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:35.697 17:58:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79427' 00:12:35.697 17:58:52 -- common/autotest_common.sh@955 -- # kill 79427 00:12:35.697 17:58:52 -- common/autotest_common.sh@960 -- # wait 79427 00:12:35.697 17:58:52 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:12:35.697 00:12:35.697 real 0m9.371s 00:12:35.697 user 0m12.168s 00:12:35.697 sys 0m4.445s 00:12:35.697 17:58:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:35.697 17:58:52 -- common/autotest_common.sh@10 -- # set +x 00:12:35.697 ************************************ 00:12:35.697 END TEST bdev_nbd 00:12:35.697 ************************************ 00:12:35.957 17:58:52 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:12:35.957 17:58:52 -- 
bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:12:35.957 17:58:52 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:12:35.957 17:58:52 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:12:35.957 17:58:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:35.957 17:58:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:35.957 17:58:52 -- common/autotest_common.sh@10 -- # set +x 00:12:35.957 ************************************ 00:12:35.957 START TEST bdev_fio 00:12:35.957 ************************************ 00:12:35.958 17:58:52 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:12:35.958 17:58:52 -- bdev/blockdev.sh@329 -- # local env_context 00:12:35.958 17:58:52 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:35.958 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:35.958 17:58:52 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:35.958 17:58:52 -- bdev/blockdev.sh@337 -- # echo '' 00:12:35.958 17:58:52 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:12:35.958 17:58:52 -- bdev/blockdev.sh@337 -- # env_context= 00:12:35.958 17:58:52 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:35.958 17:58:52 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:35.958 17:58:52 -- common/autotest_common.sh@1270 -- # local workload=verify 00:12:35.958 17:58:52 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:12:35.958 17:58:52 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:35.958 17:58:52 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:35.958 17:58:52 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:35.958 17:58:52 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:12:35.958 17:58:52 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:35.958 17:58:52 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:35.958 17:58:52 -- common/autotest_common.sh@1290 -- # cat 00:12:35.958 17:58:52 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:12:35.958 17:58:52 -- common/autotest_common.sh@1303 -- # cat 00:12:35.958 17:58:52 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:12:35.958 17:58:52 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:12:35.958 17:58:52 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:35.958 17:58:52 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:12:35.958 17:58:52 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:35.958 17:58:52 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:12:35.958 17:58:52 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:12:35.958 17:58:52 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:35.958 17:58:52 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:12:35.958 17:58:52 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:12:35.958 17:58:52 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:35.958 17:58:52 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:12:35.958 17:58:52 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:12:35.958 17:58:52 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:35.958 17:58:52 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 
00:12:35.958 17:58:52 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:12:35.958 17:58:52 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:35.958 17:58:52 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:12:35.958 17:58:52 -- bdev/blockdev.sh@341 -- # echo filename=nvme2n1 00:12:35.958 17:58:52 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:35.958 17:58:52 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:12:35.958 17:58:52 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:12:35.958 17:58:52 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:35.958 17:58:52 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:35.958 17:58:52 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:35.958 17:58:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:35.958 17:58:52 -- common/autotest_common.sh@10 -- # set +x 00:12:35.958 ************************************ 00:12:35.958 START TEST bdev_fio_rw_verify 00:12:35.958 ************************************ 00:12:35.958 17:58:52 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:35.958 17:58:52 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:35.958 17:58:52 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:12:35.958 17:58:52 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:35.958 17:58:52 -- common/autotest_common.sh@1328 -- # local sanitizers 00:12:35.958 17:58:52 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:35.958 17:58:52 -- common/autotest_common.sh@1330 -- # shift 00:12:35.958 17:58:52 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:12:35.958 17:58:52 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:12:35.958 17:58:52 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:35.958 17:58:52 -- common/autotest_common.sh@1334 -- # grep libasan 00:12:35.958 17:58:52 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:12:35.958 17:58:52 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:35.958 17:58:52 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:35.958 17:58:52 -- common/autotest_common.sh@1336 -- # break 00:12:35.958 17:58:52 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:35.958 17:58:52 -- common/autotest_common.sh@1341 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:36.217 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:36.217 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:36.217 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:36.217 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:36.217 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:36.217 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:36.217 fio-3.35 00:12:36.217 Starting 6 threads 00:12:48.422 00:12:48.422 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=79822: Tue Nov 26 17:59:03 2024 00:12:48.422 read: IOPS=33.0k, BW=129MiB/s (135MB/s)(1289MiB/10001msec) 00:12:48.422 slat (usec): min=2, max=1565, avg= 6.75, stdev= 6.57 00:12:48.422 clat (usec): min=94, max=3863, avg=580.89, stdev=211.59 00:12:48.422 lat (usec): min=107, max=3885, avg=587.63, stdev=212.49 00:12:48.422 clat percentiles (usec): 00:12:48.422 | 50.000th=[ 611], 99.000th=[ 1156], 99.900th=[ 1811], 99.990th=[ 3654], 00:12:48.422 | 99.999th=[ 3851] 00:12:48.422 write: IOPS=33.4k, BW=131MiB/s (137MB/s)(1306MiB/10001msec); 0 zone resets 00:12:48.422 slat (usec): min=10, max=1669, avg=20.69, stdev=23.78 00:12:48.422 clat (usec): min=73, max=5601, avg=649.54, stdev=207.49 00:12:48.422 lat (usec): min=91, max=5619, avg=670.23, stdev=209.63 00:12:48.422 clat percentiles (usec): 00:12:48.422 | 50.000th=[ 668], 99.000th=[ 1254], 99.900th=[ 1811], 99.990th=[ 2540], 00:12:48.422 | 99.999th=[ 5538] 00:12:48.422 bw ( KiB/s): min=106919, max=148186, per=99.65%, avg=133243.95, stdev=2244.72, samples=114 00:12:48.422 iops : min=26729, max=37045, avg=33310.00, stdev=561.17, samples=114 00:12:48.422 lat (usec) : 100=0.01%, 250=4.99%, 500=20.78%, 750=54.06%, 1000=17.16% 00:12:48.422 lat (msec) : 2=2.95%, 4=0.06%, 10=0.01% 00:12:48.422 cpu : usr=60.11%, sys=27.77%, ctx=7771, majf=0, minf=29323 00:12:48.422 IO depths : 1=12.1%, 2=24.6%, 4=50.4%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:48.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.422 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.422 issued rwts: total=330034,334306,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.422 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:48.422 00:12:48.422 Run status group 0 (all jobs): 00:12:48.422 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=1289MiB (1352MB), run=10001-10001msec 00:12:48.422 WRITE: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=1306MiB (1369MB), run=10001-10001msec 00:12:48.422 ----------------------------------------------------- 00:12:48.422 Suppressions used: 00:12:48.422 count bytes template 00:12:48.422 6 48 /usr/src/fio/parse.c 00:12:48.422 4047 388512 /usr/src/fio/iolog.c 00:12:48.422 1 8 libtcmalloc_minimal.so 00:12:48.422 1 904 libcrypto.so 00:12:48.423 
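[Note] Everything from fio_config_gen through the run above is one sequence: build a verify job file with one [job_*] stanza per bdev, locate the ASan runtime the fio plugin links against, and launch fio with both preloaded. A condensed sketch of that sequence, assuming illustrative paths and typical plugin settings (thread=1 is required by the spdk_bdev engine; the exact [global] options fio_config_gen writes are not visible in this trace):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev

# one stanza per bdev, as echoed in the trace; filename= is the bdev name,
# resolved against --spdk_json_conf when the plugin loads
cat > bdev.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
; echoed above once fio reports a 3.x version
serialize_overlap=1

[job_nvme0n1]
filename=nvme0n1
EOF

# a sanitized DSO only loads if the sanitizer runtime is loaded first, so the
# harness pulls the libasan path out of the plugin's ldd output and preloads it
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 bdev.fio \
    --verify_state_save=0 --spdk_json_conf=bdev.json

The "Suppressions used" report above appears to be LeakSanitizer output printed at fio exit, which is the payoff of running the plugin under ASan in the first place.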
----------------------------------------------------- 00:12:48.423 00:12:48.423 00:12:48.423 real 0m11.196s 00:12:48.423 user 0m36.901s 00:12:48.423 sys 0m17.110s 00:12:48.423 17:59:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:48.423 17:59:03 -- common/autotest_common.sh@10 -- # set +x 00:12:48.423 ************************************ 00:12:48.423 END TEST bdev_fio_rw_verify 00:12:48.423 ************************************ 00:12:48.423 17:59:04 -- bdev/blockdev.sh@348 -- # rm -f 00:12:48.423 17:59:04 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:48.423 17:59:04 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:48.423 17:59:04 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:48.423 17:59:04 -- common/autotest_common.sh@1270 -- # local workload=trim 00:12:48.423 17:59:04 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:12:48.423 17:59:04 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:48.423 17:59:04 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:48.423 17:59:04 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:48.423 17:59:04 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:12:48.423 17:59:04 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:48.423 17:59:04 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:48.423 17:59:04 -- common/autotest_common.sh@1290 -- # cat 00:12:48.423 17:59:04 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:12:48.423 17:59:04 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:12:48.423 17:59:04 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:12:48.423 17:59:04 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:48.423 17:59:04 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "652e91e3-8209-472a-8b18-9223bb92ab6d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "652e91e3-8209-472a-8b18-9223bb92ab6d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "9eaf6466-cc97-4502-bb63-1d38ab56e4c9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9eaf6466-cc97-4502-bb63-1d38ab56e4c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "85f13663-a943-4209-8abf-885ea8394318"' ' ],' ' "product_name": 
"xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "85f13663-a943-4209-8abf-885ea8394318",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "95eb5822-0cd0-4dcc-b426-c88150579c27"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "95eb5822-0cd0-4dcc-b426-c88150579c27",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "2cd9ff7b-31f1-49d9-b4cd-7911228a26ee"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2cd9ff7b-31f1-49d9-b4cd-7911228a26ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "cde6f43e-4c01-461c-aff5-dacc76cf55e4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "cde6f43e-4c01-461c-aff5-dacc76cf55e4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:12:48.423 17:59:04 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:12:48.423 17:59:04 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:48.423 /home/vagrant/spdk_repo/spdk 00:12:48.423 17:59:04 -- bdev/blockdev.sh@360 -- # popd 00:12:48.423 17:59:04 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:12:48.423 17:59:04 -- bdev/blockdev.sh@362 -- # return 0 00:12:48.423 00:12:48.423 real 0m11.428s 00:12:48.423 user 0m37.017s 00:12:48.423 sys 0m17.233s 00:12:48.423 17:59:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:48.423 17:59:04 -- common/autotest_common.sh@10 -- # set +x 00:12:48.423 ************************************ 00:12:48.423 END TEST bdev_fio 00:12:48.423 ************************************ 00:12:48.423 17:59:04 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:48.423 17:59:04 -- 
bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:48.423 17:59:04 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:48.423 17:59:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:48.423 17:59:04 -- common/autotest_common.sh@10 -- # set +x 00:12:48.423 ************************************ 00:12:48.423 START TEST bdev_verify 00:12:48.423 ************************************ 00:12:48.423 17:59:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:48.423 [2024-11-26 17:59:04.271342] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:48.423 [2024-11-26 17:59:04.271646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79986 ] 00:12:48.423 [2024-11-26 17:59:04.424308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:48.423 [2024-11-26 17:59:04.473639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.423 [2024-11-26 17:59:04.473748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.423 Running I/O for 5 seconds... 00:12:53.691 00:12:53.691 Latency(us) 00:12:53.691 [2024-11-26T17:59:10.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.691 [2024-11-26T17:59:10.617Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:53.691 Verification LBA range: start 0x0 length 0x20000 00:12:53.691 nvme0n1 : 5.10 1795.64 7.01 0.00 0.00 70963.99 26003.84 105278.71 00:12:53.691 [2024-11-26T17:59:10.617Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:53.691 Verification LBA range: start 0x20000 length 0x20000 00:12:53.691 nvme0n1 : 5.11 1769.02 6.91 0.00 0.00 72208.73 6316.72 96014.19 00:12:53.691 [2024-11-26T17:59:10.617Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:53.691 Verification LBA range: start 0x0 length 0x80000 00:12:53.691 nvme1n1 : 5.08 1554.33 6.07 0.00 0.00 82084.61 11475.38 110332.09 00:12:53.691 [2024-11-26T17:59:10.617Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:53.692 Verification LBA range: start 0x80000 length 0x80000 00:12:53.692 nvme1n1 : 5.11 1721.80 6.73 0.00 0.00 73877.14 21897.97 103594.26 00:12:53.692 [2024-11-26T17:59:10.618Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:53.692 Verification LBA range: start 0x0 length 0x80000 00:12:53.692 nvme1n2 : 5.10 1588.42 6.20 0.00 0.00 80077.46 24319.38 117069.93 00:12:53.692 [2024-11-26T17:59:10.618Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:53.692 Verification LBA range: start 0x80000 length 0x80000 00:12:53.692 nvme1n2 : 5.12 1609.22 6.29 0.00 0.00 78973.01 14949.58 98540.88 00:12:53.692 [2024-11-26T17:59:10.618Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:53.692 Verification LBA range: start 0x0 length 0x80000 00:12:53.692 nvme1n3 : 5.11 1487.22 5.81 0.00 0.00 85404.85 13001.92 115385.47 00:12:53.692 [2024-11-26T17:59:10.618Z] Job: nvme1n3 (Core Mask 0x2, 
workload: verify, depth: 128, IO size: 4096) 00:12:53.692 Verification LBA range: start 0x80000 length 0x80000 00:12:53.692 nvme1n3 : 5.12 1651.11 6.45 0.00 0.00 76904.60 21687.42 112858.78 00:12:53.692 [2024-11-26T17:59:10.618Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:53.692 Verification LBA range: start 0x0 length 0xbd0bd 00:12:53.692 nvme2n1 : 5.09 1964.54 7.67 0.00 0.00 64746.89 7474.79 104857.60 00:12:53.692 [2024-11-26T17:59:10.618Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:53.692 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:53.692 nvme2n1 : 5.12 1932.51 7.55 0.00 0.00 65734.55 8106.46 89697.47 00:12:53.692 [2024-11-26T17:59:10.618Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:53.692 Verification LBA range: start 0x0 length 0xa0000 00:12:53.692 nvme3n1 : 5.11 1579.97 6.17 0.00 0.00 80300.27 17055.15 115385.47 00:12:53.692 [2024-11-26T17:59:10.618Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:53.692 Verification LBA range: start 0xa0000 length 0xa0000 00:12:53.692 nvme3n1 : 5.13 1794.38 7.01 0.00 0.00 70598.99 15897.09 100646.45 00:12:53.692 [2024-11-26T17:59:10.618Z] =================================================================================================================== 00:12:53.692 [2024-11-26T17:59:10.618Z] Total : 20448.17 79.88 0.00 0.00 74626.25 6316.72 117069.93 00:12:53.692 00:12:53.692 real 0m5.888s 00:12:53.692 user 0m7.307s 00:12:53.692 sys 0m3.120s 00:12:53.692 17:59:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:53.692 17:59:10 -- common/autotest_common.sh@10 -- # set +x 00:12:53.692 ************************************ 00:12:53.692 END TEST bdev_verify 00:12:53.692 ************************************ 00:12:53.692 17:59:10 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:53.692 17:59:10 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:53.692 17:59:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:53.692 17:59:10 -- common/autotest_common.sh@10 -- # set +x 00:12:53.692 ************************************ 00:12:53.692 START TEST bdev_verify_big_io 00:12:53.692 ************************************ 00:12:53.692 17:59:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:53.692 [2024-11-26 17:59:10.230603] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:53.692 [2024-11-26 17:59:10.230905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80080 ] 00:12:53.692 [2024-11-26 17:59:10.382592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:53.692 [2024-11-26 17:59:10.424744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.692 [2024-11-26 17:59:10.424848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.951 Running I/O for 5 seconds... 
00:12:59.223 00:12:59.223 Latency(us) 00:12:59.223 [2024-11-26T17:59:16.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.223 [2024-11-26T17:59:16.149Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:59.223 Verification LBA range: start 0x0 length 0x2000 00:12:59.223 nvme0n1 : 5.40 338.45 21.15 0.00 0.00 371067.65 102330.91 397532.43 00:12:59.223 [2024-11-26T17:59:16.149Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:59.223 Verification LBA range: start 0x2000 length 0x2000 00:12:59.223 nvme0n1 : 5.38 340.17 21.26 0.00 0.00 368672.37 69483.95 402585.81 00:12:59.223 [2024-11-26T17:59:16.149Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:59.223 Verification LBA range: start 0x0 length 0x8000 00:12:59.223 nvme1n1 : 5.40 323.72 20.23 0.00 0.00 384008.93 68220.61 411008.10 00:12:59.223 [2024-11-26T17:59:16.149Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:59.223 Verification LBA range: start 0x8000 length 0x8000 00:12:59.224 nvme1n1 : 5.37 311.71 19.48 0.00 0.00 396440.12 82117.40 390794.59 00:12:59.224 [2024-11-26T17:59:16.150Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:59.224 Verification LBA range: start 0x0 length 0x8000 00:12:59.224 nvme1n2 : 5.40 353.43 22.09 0.00 0.00 346758.14 33689.19 426168.24 00:12:59.224 [2024-11-26T17:59:16.150Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:59.224 Verification LBA range: start 0x8000 length 0x8000 00:12:59.224 nvme1n2 : 5.38 310.16 19.38 0.00 0.00 397217.10 69483.95 419430.40 00:12:59.224 [2024-11-26T17:59:16.150Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:59.224 Verification LBA range: start 0x0 length 0x8000 00:12:59.224 nvme1n3 : 5.41 323.69 20.23 0.00 0.00 375364.40 37268.67 444697.29 00:12:59.224 [2024-11-26T17:59:16.150Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:59.224 Verification LBA range: start 0x8000 length 0x8000 00:12:59.224 nvme1n3 : 5.37 342.65 21.42 0.00 0.00 353139.37 117069.93 417745.94 00:12:59.224 [2024-11-26T17:59:16.150Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:59.224 Verification LBA range: start 0x0 length 0xbd0b 00:12:59.224 nvme2n1 : 5.41 416.78 26.05 0.00 0.00 289813.99 5263.94 429537.16 00:12:59.224 [2024-11-26T17:59:16.150Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:59.224 Verification LBA range: start 0xbd0b length 0xbd0b 00:12:59.224 nvme2n1 : 5.39 418.15 26.13 0.00 0.00 289082.27 15897.09 367212.16 00:12:59.224 [2024-11-26T17:59:16.150Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:59.224 Verification LBA range: start 0x0 length 0xa000 00:12:59.224 nvme3n1 : 5.41 322.92 20.18 0.00 0.00 368599.90 3526.84 395847.97 00:12:59.224 [2024-11-26T17:59:16.150Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:59.224 Verification LBA range: start 0xa000 length 0xa000 00:12:59.224 nvme3n1 : 5.39 339.10 21.19 0.00 0.00 354074.99 5948.25 416061.48 00:12:59.224 [2024-11-26T17:59:16.150Z] =================================================================================================================== 00:12:59.224 [2024-11-26T17:59:16.150Z] Total : 4140.93 258.81 0.00 0.00 354461.93 3526.84 444697.29 00:12:59.483 00:12:59.483 real 0m6.179s 00:12:59.483 user 
0m10.905s 00:12:59.483 sys 0m0.727s 00:12:59.483 17:59:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:59.483 17:59:16 -- common/autotest_common.sh@10 -- # set +x 00:12:59.483 ************************************ 00:12:59.483 END TEST bdev_verify_big_io 00:12:59.483 ************************************ 00:12:59.483 17:59:16 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:59.483 17:59:16 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:59.483 17:59:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:59.483 17:59:16 -- common/autotest_common.sh@10 -- # set +x 00:12:59.483 ************************************ 00:12:59.483 START TEST bdev_write_zeroes 00:12:59.483 ************************************ 00:12:59.483 17:59:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:59.742 [2024-11-26 17:59:16.480398] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:59.742 [2024-11-26 17:59:16.480656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80168 ] 00:12:59.742 [2024-11-26 17:59:16.628891] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.999 [2024-11-26 17:59:16.670553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.999 Running I/O for 1 seconds... 00:13:01.391 00:13:01.391 Latency(us) 00:13:01.391 [2024-11-26T17:59:18.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.391 [2024-11-26T17:59:18.317Z] Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:01.391 nvme0n1 : 1.01 9924.33 38.77 0.00 0.00 12885.03 8474.94 22529.64 00:13:01.391 [2024-11-26T17:59:18.317Z] Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:01.391 nvme1n1 : 1.01 9904.02 38.69 0.00 0.00 12902.81 8738.13 22950.76 00:13:01.391 [2024-11-26T17:59:18.317Z] Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:01.391 nvme1n2 : 1.01 9884.78 38.61 0.00 0.00 12917.36 8580.22 23371.87 00:13:01.391 [2024-11-26T17:59:18.317Z] Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:01.391 nvme1n3 : 1.01 9869.23 38.55 0.00 0.00 12930.63 8632.85 23687.71 00:13:01.391 [2024-11-26T17:59:18.317Z] Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:01.391 nvme2n1 : 1.02 11157.84 43.59 0.00 0.00 11429.22 4158.51 19581.84 00:13:01.391 [2024-11-26T17:59:18.317Z] Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:01.391 nvme3n1 : 1.03 9829.69 38.40 0.00 0.00 12916.98 5948.25 23582.43 00:13:01.391 [2024-11-26T17:59:18.317Z] =================================================================================================================== 00:13:01.391 [2024-11-26T17:59:18.317Z] Total : 60569.88 236.60 0.00 0.00 12635.28 4158.51 23687.71 00:13:01.391 ************************************ 00:13:01.391 END TEST bdev_write_zeroes 00:13:01.391 ************************************ 00:13:01.391 00:13:01.391 real 0m1.742s 
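[Note] The three bdevperf passes above share one invocation shape and vary only the workload: 4 KiB verify and 64 KiB verify on two cores, then a one-second write_zeroes pass on one core. The commands, condensed from the trace with the common flags annotated (meanings per standard bdevperf usage; -C is copied through uninterpreted, and the paths are shortened):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

# -q 128   queue depth
# -o N     I/O size in bytes
# -w TYPE  workload: verify writes a pattern, reads it back, and compares;
#          write_zeroes issues the zeroing command as named
# -t N     run time in seconds
# -m MASK  core mask; 0x3 yields the Core Mask 0x1/0x2 rows in the tables above
"$bdevperf" --json test/bdev/bdev.json -q 128 -o 4096  -w verify       -t 5 -C -m 0x3
"$bdevperf" --json test/bdev/bdev.json -q 128 -o 65536 -w verify       -t 5 -C -m 0x3
"$bdevperf" --json test/bdev/bdev.json -q 128 -o 4096  -w write_zeroes -t 1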
00:13:01.391 user 0m1.007s 00:13:01.391 sys 0m0.552s 00:13:01.391 17:59:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:01.391 17:59:18 -- common/autotest_common.sh@10 -- # set +x 00:13:01.391 17:59:18 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:01.391 17:59:18 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:01.391 17:59:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:01.391 17:59:18 -- common/autotest_common.sh@10 -- # set +x 00:13:01.391 ************************************ 00:13:01.391 START TEST bdev_json_nonenclosed 00:13:01.391 ************************************ 00:13:01.391 17:59:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:01.391 [2024-11-26 17:59:18.289965] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:01.391 [2024-11-26 17:59:18.290083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80204 ] 00:13:01.652 [2024-11-26 17:59:18.441129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.652 [2024-11-26 17:59:18.484493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.652 [2024-11-26 17:59:18.484681] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:01.652 [2024-11-26 17:59:18.484736] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:01.910 00:13:01.910 real 0m0.384s 00:13:01.910 user 0m0.162s 00:13:01.910 sys 0m0.119s 00:13:01.910 17:59:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:01.910 17:59:18 -- common/autotest_common.sh@10 -- # set +x 00:13:01.910 ************************************ 00:13:01.910 END TEST bdev_json_nonenclosed 00:13:01.910 ************************************ 00:13:01.910 17:59:18 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:01.910 17:59:18 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:01.910 17:59:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:01.910 17:59:18 -- common/autotest_common.sh@10 -- # set +x 00:13:01.910 ************************************ 00:13:01.910 START TEST bdev_json_nonarray 00:13:01.910 ************************************ 00:13:01.910 17:59:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:01.910 [2024-11-26 17:59:18.767985] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:13:01.910 [2024-11-26 17:59:18.768145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80234 ] 00:13:02.169 [2024-11-26 17:59:18.927489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.169 [2024-11-26 17:59:18.969933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.169 [2024-11-26 17:59:18.970139] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:02.169 [2024-11-26 17:59:18.970166] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:02.169 00:13:02.169 real 0m0.414s 00:13:02.169 user 0m0.168s 00:13:02.169 sys 0m0.142s 00:13:02.169 ************************************ 00:13:02.169 END TEST bdev_json_nonarray 00:13:02.169 ************************************ 00:13:02.169 17:59:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:02.169 17:59:19 -- common/autotest_common.sh@10 -- # set +x 00:13:02.428 17:59:19 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:13:02.428 17:59:19 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:13:02.428 17:59:19 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:13:02.428 17:59:19 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:13:02.428 17:59:19 -- bdev/blockdev.sh@809 -- # cleanup 00:13:02.428 17:59:19 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:02.428 17:59:19 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:02.428 17:59:19 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:13:02.428 17:59:19 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:13:02.428 17:59:19 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:13:02.428 17:59:19 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:13:02.428 17:59:19 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:03.804 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:21.904 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:13:21.904 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:13:21.904 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:13:21.904 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:13:21.904 00:13:21.904 real 1m3.976s 00:13:21.904 user 1m22.158s 00:13:21.904 sys 0m51.000s 00:13:21.904 17:59:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:21.904 ************************************ 00:13:21.904 END TEST blockdev_xnvme 00:13:21.904 ************************************ 00:13:21.904 17:59:36 -- common/autotest_common.sh@10 -- # set +x 00:13:21.904 17:59:36 -- spdk/autotest.sh@246 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:21.904 17:59:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:21.904 17:59:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:21.904 17:59:36 -- common/autotest_common.sh@10 -- # set +x 00:13:21.904 ************************************ 00:13:21.904 START TEST ublk 00:13:21.904 ************************************ 00:13:21.904 17:59:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:21.904 * Looking for test storage... 
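[Note] Both JSON negative tests above only assert that bdevperf refuses a malformed config with the matching error message. Minimal sketches of inputs that would trip each check (illustrative; the repo's actual nonenclosed.json and nonarray.json fixtures are not reproduced in the log):

# top-level value is a JSON array, not an object:
# "Invalid JSON configuration: not enclosed in {}."
cat > nonenclosed.json <<'EOF'
[ { "subsystems": [] } ]
EOF

# "subsystems" is present but is an object where an array is required:
# "Invalid JSON configuration: 'subsystems' should be an array."
cat > nonarray.json <<'EOF'
{ "subsystems": { "subsystem": "bdev", "config": [] } }
EOF

In both runs the *ERROR* line followed by "spdk_app_stop'd on non-zero" is evidently the expected outcome, since each test still finishes with its END TEST banner.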
00:13:21.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:21.904 17:59:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:21.904 17:59:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:21.904 17:59:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:21.904 17:59:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:21.904 17:59:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:21.904 17:59:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:21.904 17:59:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:21.904 17:59:37 -- scripts/common.sh@335 -- # IFS=.-: 00:13:21.904 17:59:37 -- scripts/common.sh@335 -- # read -ra ver1 00:13:21.904 17:59:37 -- scripts/common.sh@336 -- # IFS=.-: 00:13:21.904 17:59:37 -- scripts/common.sh@336 -- # read -ra ver2 00:13:21.904 17:59:37 -- scripts/common.sh@337 -- # local 'op=<' 00:13:21.904 17:59:37 -- scripts/common.sh@339 -- # ver1_l=2 00:13:21.904 17:59:37 -- scripts/common.sh@340 -- # ver2_l=1 00:13:21.904 17:59:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:21.904 17:59:37 -- scripts/common.sh@343 -- # case "$op" in 00:13:21.904 17:59:37 -- scripts/common.sh@344 -- # : 1 00:13:21.904 17:59:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:21.905 17:59:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:21.905 17:59:37 -- scripts/common.sh@364 -- # decimal 1 00:13:21.905 17:59:37 -- scripts/common.sh@352 -- # local d=1 00:13:21.905 17:59:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:21.905 17:59:37 -- scripts/common.sh@354 -- # echo 1 00:13:21.905 17:59:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:21.905 17:59:37 -- scripts/common.sh@365 -- # decimal 2 00:13:21.905 17:59:37 -- scripts/common.sh@352 -- # local d=2 00:13:21.905 17:59:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:21.905 17:59:37 -- scripts/common.sh@354 -- # echo 2 00:13:21.905 17:59:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:21.905 17:59:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:21.905 17:59:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:21.905 17:59:37 -- scripts/common.sh@367 -- # return 0 00:13:21.905 17:59:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:21.905 17:59:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:21.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.905 --rc genhtml_branch_coverage=1 00:13:21.905 --rc genhtml_function_coverage=1 00:13:21.905 --rc genhtml_legend=1 00:13:21.905 --rc geninfo_all_blocks=1 00:13:21.905 --rc geninfo_unexecuted_blocks=1 00:13:21.905 00:13:21.905 ' 00:13:21.905 17:59:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:21.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.905 --rc genhtml_branch_coverage=1 00:13:21.905 --rc genhtml_function_coverage=1 00:13:21.905 --rc genhtml_legend=1 00:13:21.905 --rc geninfo_all_blocks=1 00:13:21.905 --rc geninfo_unexecuted_blocks=1 00:13:21.905 00:13:21.905 ' 00:13:21.905 17:59:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:21.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.905 --rc genhtml_branch_coverage=1 00:13:21.905 --rc genhtml_function_coverage=1 00:13:21.905 --rc genhtml_legend=1 00:13:21.905 --rc geninfo_all_blocks=1 00:13:21.905 --rc geninfo_unexecuted_blocks=1 00:13:21.905 00:13:21.905 ' 00:13:21.905 17:59:37 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:21.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.905 --rc genhtml_branch_coverage=1 00:13:21.905 --rc genhtml_function_coverage=1 00:13:21.905 --rc genhtml_legend=1 00:13:21.905 --rc geninfo_all_blocks=1 00:13:21.905 --rc geninfo_unexecuted_blocks=1 00:13:21.905 00:13:21.905 ' 00:13:21.905 17:59:37 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:21.905 17:59:37 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:21.905 17:59:37 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:21.905 17:59:37 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:21.905 17:59:37 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:21.905 17:59:37 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:21.905 17:59:37 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:21.905 17:59:37 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:21.905 17:59:37 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:21.905 17:59:37 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:21.905 17:59:37 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:21.905 17:59:37 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:21.905 17:59:37 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:21.905 17:59:37 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:21.905 17:59:37 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:21.905 17:59:37 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:21.905 17:59:37 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:21.905 17:59:37 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:21.905 17:59:37 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:21.905 17:59:37 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:21.905 17:59:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:21.905 17:59:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:21.905 17:59:37 -- common/autotest_common.sh@10 -- # set +x 00:13:21.905 ************************************ 00:13:21.905 START TEST test_save_ublk_config 00:13:21.905 ************************************ 00:13:21.905 17:59:37 -- common/autotest_common.sh@1114 -- # test_save_config 00:13:21.905 17:59:37 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:21.905 17:59:37 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:21.905 17:59:37 -- ublk/ublk.sh@103 -- # tgtpid=80634 00:13:21.905 17:59:37 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:21.905 17:59:37 -- ublk/ublk.sh@106 -- # waitforlisten 80634 00:13:21.905 17:59:37 -- common/autotest_common.sh@829 -- # '[' -z 80634 ']' 00:13:21.905 17:59:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.905 17:59:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:21.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.905 17:59:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.905 17:59:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:21.905 17:59:37 -- common/autotest_common.sh@10 -- # set +x 00:13:21.905 [2024-11-26 17:59:37.259290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
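[Note] waitforlisten above is the gate between launching spdk_tgt and driving it over RPC: it polls the UNIX socket until the target answers. A sketch of the idiom it wraps (retry bounds and probe method illustrative, not the exact helper from autotest_common.sh):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

for ((i = 0; i < 100; i++)); do
    # any cheap call proves the RPC server is up; rpc_get_methods always exists
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done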
00:13:21.905 [2024-11-26 17:59:37.259617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80634 ] 00:13:21.905 [2024-11-26 17:59:37.410674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.905 [2024-11-26 17:59:37.461724] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:21.905 [2024-11-26 17:59:37.462125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.905 17:59:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:21.905 17:59:38 -- common/autotest_common.sh@862 -- # return 0 00:13:21.905 17:59:38 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:21.905 17:59:38 -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:21.905 17:59:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.905 17:59:38 -- common/autotest_common.sh@10 -- # set +x 00:13:21.905 [2024-11-26 17:59:38.064717] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:21.905 malloc0 00:13:21.905 [2024-11-26 17:59:38.095610] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:21.905 [2024-11-26 17:59:38.095705] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:21.905 [2024-11-26 17:59:38.095718] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:21.905 [2024-11-26 17:59:38.095729] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:21.905 [2024-11-26 17:59:38.103502] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:21.905 [2024-11-26 17:59:38.103538] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:21.905 [2024-11-26 17:59:38.111495] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:21.905 [2024-11-26 17:59:38.111593] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:21.905 [2024-11-26 17:59:38.135489] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:21.905 0 00:13:21.905 17:59:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.905 17:59:38 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:21.905 17:59:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.905 17:59:38 -- common/autotest_common.sh@10 -- # set +x 00:13:21.905 17:59:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.905 17:59:38 -- ublk/ublk.sh@115 -- # config='{ 00:13:21.905 "subsystems": [ 00:13:21.905 { 00:13:21.905 "subsystem": "iobuf", 00:13:21.905 "config": [ 00:13:21.905 { 00:13:21.905 "method": "iobuf_set_options", 00:13:21.905 "params": { 00:13:21.905 "small_pool_count": 8192, 00:13:21.905 "large_pool_count": 1024, 00:13:21.905 "small_bufsize": 8192, 00:13:21.905 "large_bufsize": 135168 00:13:21.905 } 00:13:21.905 } 00:13:21.905 ] 00:13:21.905 }, 00:13:21.905 { 00:13:21.905 "subsystem": "sock", 00:13:21.905 "config": [ 00:13:21.905 { 00:13:21.905 "method": "sock_impl_set_options", 00:13:21.905 "params": { 00:13:21.905 "impl_name": "posix", 00:13:21.905 "recv_buf_size": 2097152, 00:13:21.905 "send_buf_size": 2097152, 00:13:21.905 "enable_recv_pipe": true, 00:13:21.905 "enable_quickack": false, 00:13:21.905 "enable_placement_id": 0, 00:13:21.905 
"enable_zerocopy_send_server": true, 00:13:21.905 "enable_zerocopy_send_client": false, 00:13:21.905 "zerocopy_threshold": 0, 00:13:21.905 "tls_version": 0, 00:13:21.905 "enable_ktls": false 00:13:21.905 } 00:13:21.905 }, 00:13:21.905 { 00:13:21.905 "method": "sock_impl_set_options", 00:13:21.905 "params": { 00:13:21.905 "impl_name": "ssl", 00:13:21.905 "recv_buf_size": 4096, 00:13:21.905 "send_buf_size": 4096, 00:13:21.905 "enable_recv_pipe": true, 00:13:21.905 "enable_quickack": false, 00:13:21.905 "enable_placement_id": 0, 00:13:21.905 "enable_zerocopy_send_server": true, 00:13:21.905 "enable_zerocopy_send_client": false, 00:13:21.905 "zerocopy_threshold": 0, 00:13:21.905 "tls_version": 0, 00:13:21.905 "enable_ktls": false 00:13:21.905 } 00:13:21.905 } 00:13:21.905 ] 00:13:21.905 }, 00:13:21.905 { 00:13:21.905 "subsystem": "vmd", 00:13:21.905 "config": [] 00:13:21.905 }, 00:13:21.905 { 00:13:21.905 "subsystem": "accel", 00:13:21.905 "config": [ 00:13:21.905 { 00:13:21.905 "method": "accel_set_options", 00:13:21.905 "params": { 00:13:21.905 "small_cache_size": 128, 00:13:21.905 "large_cache_size": 16, 00:13:21.905 "task_count": 2048, 00:13:21.905 "sequence_count": 2048, 00:13:21.905 "buf_count": 2048 00:13:21.905 } 00:13:21.905 } 00:13:21.905 ] 00:13:21.905 }, 00:13:21.905 { 00:13:21.905 "subsystem": "bdev", 00:13:21.905 "config": [ 00:13:21.905 { 00:13:21.905 "method": "bdev_set_options", 00:13:21.905 "params": { 00:13:21.905 "bdev_io_pool_size": 65535, 00:13:21.905 "bdev_io_cache_size": 256, 00:13:21.905 "bdev_auto_examine": true, 00:13:21.905 "iobuf_small_cache_size": 128, 00:13:21.905 "iobuf_large_cache_size": 16 00:13:21.905 } 00:13:21.905 }, 00:13:21.906 { 00:13:21.906 "method": "bdev_raid_set_options", 00:13:21.906 "params": { 00:13:21.906 "process_window_size_kb": 1024 00:13:21.906 } 00:13:21.906 }, 00:13:21.906 { 00:13:21.906 "method": "bdev_iscsi_set_options", 00:13:21.906 "params": { 00:13:21.906 "timeout_sec": 30 00:13:21.906 } 00:13:21.906 }, 00:13:21.906 { 00:13:21.906 "method": "bdev_nvme_set_options", 00:13:21.906 "params": { 00:13:21.906 "action_on_timeout": "none", 00:13:21.906 "timeout_us": 0, 00:13:21.906 "timeout_admin_us": 0, 00:13:21.906 "keep_alive_timeout_ms": 10000, 00:13:21.906 "transport_retry_count": 4, 00:13:21.906 "arbitration_burst": 0, 00:13:21.906 "low_priority_weight": 0, 00:13:21.906 "medium_priority_weight": 0, 00:13:21.906 "high_priority_weight": 0, 00:13:21.906 "nvme_adminq_poll_period_us": 10000, 00:13:21.906 "nvme_ioq_poll_period_us": 0, 00:13:21.906 "io_queue_requests": 0, 00:13:21.906 "delay_cmd_submit": true, 00:13:21.906 "bdev_retry_count": 3, 00:13:21.906 "transport_ack_timeout": 0, 00:13:21.906 "ctrlr_loss_timeout_sec": 0, 00:13:21.906 "reconnect_delay_sec": 0, 00:13:21.906 "fast_io_fail_timeout_sec": 0, 00:13:21.906 "generate_uuids": false, 00:13:21.906 "transport_tos": 0, 00:13:21.906 "io_path_stat": false, 00:13:21.906 "allow_accel_sequence": false 00:13:21.906 } 00:13:21.906 }, 00:13:21.906 { 00:13:21.906 "method": "bdev_nvme_set_hotplug", 00:13:21.906 "params": { 00:13:21.906 "period_us": 100000, 00:13:21.906 "enable": false 00:13:21.906 } 00:13:21.906 }, 00:13:21.906 { 00:13:21.906 "method": "bdev_malloc_create", 00:13:21.906 "params": { 00:13:21.906 "name": "malloc0", 00:13:21.906 "num_blocks": 8192, 00:13:21.906 "block_size": 4096, 00:13:21.906 "physical_block_size": 4096, 00:13:21.906 "uuid": "021f639a-4caa-4775-bb69-d923723c705c", 00:13:21.906 "optimal_io_boundary": 0 00:13:21.906 } 00:13:21.906 }, 00:13:21.906 { 00:13:21.906 
"method": "bdev_wait_for_examine" 00:13:21.906 } 00:13:21.906 ] 00:13:21.906 }, 00:13:21.906 { 00:13:21.906 "subsystem": "scsi", 00:13:21.906 "config": null 00:13:21.906 }, 00:13:21.906 { 00:13:21.906 "subsystem": "scheduler", 00:13:21.906 "config": [ 00:13:21.906 { 00:13:21.906 "method": "framework_set_scheduler", 00:13:21.906 "params": { 00:13:21.906 "name": "static" 00:13:21.906 } 00:13:21.906 } 00:13:21.906 ] 00:13:21.906 }, 00:13:21.906 { 00:13:21.906 "subsystem": "vhost_scsi", 00:13:21.906 "config": [] 00:13:21.906 }, 00:13:21.906 { 00:13:21.906 "subsystem": "vhost_blk", 00:13:21.906 "config": [] 00:13:21.906 }, 00:13:21.906 { 00:13:21.906 "subsystem": "ublk", 00:13:21.906 "config": [ 00:13:21.906 { 00:13:21.906 "method": "ublk_create_target", 00:13:21.906 "params": { 00:13:21.906 "cpumask": "1" 00:13:21.906 } 00:13:21.906 }, 00:13:21.906 { 00:13:21.906 "method": "ublk_start_disk", 00:13:21.906 "params": { 00:13:21.906 "bdev_name": "malloc0", 00:13:21.906 "ublk_id": 0, 00:13:21.906 "num_queues": 1, 00:13:21.906 "queue_depth": 128 00:13:21.906 } 00:13:21.906 } 00:13:21.906 ] 00:13:21.906 }, 00:13:21.906 { 00:13:21.906 "subsystem": "nbd", 00:13:21.906 "config": [] 00:13:21.906 }, 00:13:21.906 { 00:13:21.906 "subsystem": "nvmf", 00:13:21.906 "config": [ 00:13:21.906 { 00:13:21.906 "method": "nvmf_set_config", 00:13:21.906 "params": { 00:13:21.906 "discovery_filter": "match_any", 00:13:21.906 "admin_cmd_passthru": { 00:13:21.906 "identify_ctrlr": false 00:13:21.906 } 00:13:21.906 } 00:13:21.906 }, 00:13:21.906 { 00:13:21.906 "method": "nvmf_set_max_subsystems", 00:13:21.906 "params": { 00:13:21.906 "max_subsystems": 1024 00:13:21.906 } 00:13:21.906 }, 00:13:21.906 { 00:13:21.906 "method": "nvmf_set_crdt", 00:13:21.906 "params": { 00:13:21.906 "crdt1": 0, 00:13:21.906 "crdt2": 0, 00:13:21.906 "crdt3": 0 00:13:21.906 } 00:13:21.906 } 00:13:21.906 ] 00:13:21.906 }, 00:13:21.906 { 00:13:21.906 "subsystem": "iscsi", 00:13:21.906 "config": [ 00:13:21.906 { 00:13:21.906 "method": "iscsi_set_options", 00:13:21.906 "params": { 00:13:21.906 "node_base": "iqn.2016-06.io.spdk", 00:13:21.906 "max_sessions": 128, 00:13:21.906 "max_connections_per_session": 2, 00:13:21.906 "max_queue_depth": 64, 00:13:21.906 "default_time2wait": 2, 00:13:21.906 "default_time2retain": 20, 00:13:21.906 "first_burst_length": 8192, 00:13:21.906 "immediate_data": true, 00:13:21.906 "allow_duplicated_isid": false, 00:13:21.906 "error_recovery_level": 0, 00:13:21.906 "nop_timeout": 60, 00:13:21.906 "nop_in_interval": 30, 00:13:21.906 "disable_chap": false, 00:13:21.906 "require_chap": false, 00:13:21.906 "mutual_chap": false, 00:13:21.906 "chap_group": 0, 00:13:21.906 "max_large_datain_per_connection": 64, 00:13:21.906 "max_r2t_per_connection": 4, 00:13:21.906 "pdu_pool_size": 36864, 00:13:21.906 "immediate_data_pool_size": 16384, 00:13:21.906 "data_out_pool_size": 2048 00:13:21.906 } 00:13:21.906 } 00:13:21.906 ] 00:13:21.906 } 00:13:21.906 ] 00:13:21.906 }' 00:13:21.906 17:59:38 -- ublk/ublk.sh@116 -- # killprocess 80634 00:13:21.906 17:59:38 -- common/autotest_common.sh@936 -- # '[' -z 80634 ']' 00:13:21.906 17:59:38 -- common/autotest_common.sh@940 -- # kill -0 80634 00:13:21.906 17:59:38 -- common/autotest_common.sh@941 -- # uname 00:13:21.906 17:59:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:21.906 17:59:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80634 00:13:21.906 17:59:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:21.906 killing process with pid 
80634 00:13:21.906 17:59:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:21.906 17:59:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80634' 00:13:21.906 17:59:38 -- common/autotest_common.sh@955 -- # kill 80634 00:13:21.906 17:59:38 -- common/autotest_common.sh@960 -- # wait 80634 00:13:21.906 [2024-11-26 17:59:38.714315] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:21.906 [2024-11-26 17:59:38.746516] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:21.906 [2024-11-26 17:59:38.746643] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:21.906 [2024-11-26 17:59:38.755493] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:21.906 [2024-11-26 17:59:38.755557] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:21.906 [2024-11-26 17:59:38.755566] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:21.906 [2024-11-26 17:59:38.755596] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:21.906 [2024-11-26 17:59:38.755729] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:22.475 17:59:39 -- ublk/ublk.sh@119 -- # tgtpid=80672 00:13:22.475 17:59:39 -- ublk/ublk.sh@121 -- # waitforlisten 80672 00:13:22.475 17:59:39 -- common/autotest_common.sh@829 -- # '[' -z 80672 ']' 00:13:22.475 17:59:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.475 17:59:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.475 17:59:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
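What follows is the second half of test_save_ublk_config: the first target (pid 80634) dumped its full subsystem configuration over JSON-RPC, was killed above, and a fresh spdk_tgt is now launched with that same JSON fed back in via -c /dev/fd/63. A minimal sketch of the same save/reload round-trip, assuming rpc.py is on PATH and the target listens on the default /var/tmp/spdk.sock (the test streams the JSON through process substitution rather than a file, and "$old_tgt_pid" here is a hypothetical variable name):

rpc.py save_config > /tmp/ublk_config.json    # dump every subsystem shown in the JSON above
kill "$old_tgt_pid" && wait "$old_tgt_pid"    # stop the first target
spdk_tgt -L ublk -c /tmp/ublk_config.json &   # replay the saved config at startup
rpc.py ublk_get_disks                         # /dev/ublkb0 should reappear with identical parameters

The check at ublk.sh@122 below does exactly that last step, asserting that .[0].ublk_device is still /dev/ublkb0 after the reload.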
00:13:22.475 17:59:39 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:22.475 17:59:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.475 17:59:39 -- common/autotest_common.sh@10 -- # set +x 00:13:22.475 17:59:39 -- ublk/ublk.sh@118 -- # echo '{ 00:13:22.475 "subsystems": [ 00:13:22.475 { 00:13:22.475 "subsystem": "iobuf", 00:13:22.475 "config": [ 00:13:22.475 { 00:13:22.475 "method": "iobuf_set_options", 00:13:22.475 "params": { 00:13:22.475 "small_pool_count": 8192, 00:13:22.475 "large_pool_count": 1024, 00:13:22.475 "small_bufsize": 8192, 00:13:22.475 "large_bufsize": 135168 00:13:22.475 } 00:13:22.475 } 00:13:22.475 ] 00:13:22.475 }, 00:13:22.475 { 00:13:22.475 "subsystem": "sock", 00:13:22.475 "config": [ 00:13:22.475 { 00:13:22.475 "method": "sock_impl_set_options", 00:13:22.475 "params": { 00:13:22.475 "impl_name": "posix", 00:13:22.475 "recv_buf_size": 2097152, 00:13:22.475 "send_buf_size": 2097152, 00:13:22.475 "enable_recv_pipe": true, 00:13:22.475 "enable_quickack": false, 00:13:22.475 "enable_placement_id": 0, 00:13:22.475 "enable_zerocopy_send_server": true, 00:13:22.475 "enable_zerocopy_send_client": false, 00:13:22.475 "zerocopy_threshold": 0, 00:13:22.475 "tls_version": 0, 00:13:22.475 "enable_ktls": false 00:13:22.475 } 00:13:22.475 }, 00:13:22.475 { 00:13:22.475 "method": "sock_impl_set_options", 00:13:22.475 "params": { 00:13:22.475 "impl_name": "ssl", 00:13:22.475 "recv_buf_size": 4096, 00:13:22.475 "send_buf_size": 4096, 00:13:22.475 "enable_recv_pipe": true, 00:13:22.475 "enable_quickack": false, 00:13:22.475 "enable_placement_id": 0, 00:13:22.475 "enable_zerocopy_send_server": true, 00:13:22.475 "enable_zerocopy_send_client": false, 00:13:22.475 "zerocopy_threshold": 0, 00:13:22.475 "tls_version": 0, 00:13:22.475 "enable_ktls": false 00:13:22.475 } 00:13:22.475 } 00:13:22.475 ] 00:13:22.475 }, 00:13:22.475 { 00:13:22.475 "subsystem": "vmd", 00:13:22.475 "config": [] 00:13:22.475 }, 00:13:22.475 { 00:13:22.475 "subsystem": "accel", 00:13:22.475 "config": [ 00:13:22.475 { 00:13:22.475 "method": "accel_set_options", 00:13:22.475 "params": { 00:13:22.475 "small_cache_size": 128, 00:13:22.475 "large_cache_size": 16, 00:13:22.475 "task_count": 2048, 00:13:22.475 "sequence_count": 2048, 00:13:22.475 "buf_count": 2048 00:13:22.475 } 00:13:22.475 } 00:13:22.475 ] 00:13:22.475 }, 00:13:22.475 { 00:13:22.475 "subsystem": "bdev", 00:13:22.475 "config": [ 00:13:22.475 { 00:13:22.475 "method": "bdev_set_options", 00:13:22.475 "params": { 00:13:22.475 "bdev_io_pool_size": 65535, 00:13:22.475 "bdev_io_cache_size": 256, 00:13:22.475 "bdev_auto_examine": true, 00:13:22.475 "iobuf_small_cache_size": 128, 00:13:22.475 "iobuf_large_cache_size": 16 00:13:22.475 } 00:13:22.475 }, 00:13:22.475 { 00:13:22.475 "method": "bdev_raid_set_options", 00:13:22.475 "params": { 00:13:22.475 "process_window_size_kb": 1024 00:13:22.475 } 00:13:22.475 }, 00:13:22.475 { 00:13:22.475 "method": "bdev_iscsi_set_options", 00:13:22.475 "params": { 00:13:22.475 "timeout_sec": 30 00:13:22.475 } 00:13:22.475 }, 00:13:22.475 { 00:13:22.475 "method": "bdev_nvme_set_options", 00:13:22.475 "params": { 00:13:22.475 "action_on_timeout": "none", 00:13:22.475 "timeout_us": 0, 00:13:22.475 "timeout_admin_us": 0, 00:13:22.475 "keep_alive_timeout_ms": 10000, 00:13:22.475 "transport_retry_count": 4, 00:13:22.475 "arbitration_burst": 0, 00:13:22.475 "low_priority_weight": 0, 00:13:22.475 "medium_priority_weight": 0, 00:13:22.475 "high_priority_weight": 0, 
00:13:22.475 "nvme_adminq_poll_period_us": 10000, 00:13:22.475 "nvme_ioq_poll_period_us": 0, 00:13:22.476 "io_queue_requests": 0, 00:13:22.476 "delay_cmd_submit": true, 00:13:22.476 "bdev_retry_count": 3, 00:13:22.476 "transport_ack_timeout": 0, 00:13:22.476 "ctrlr_loss_timeout_sec": 0, 00:13:22.476 "reconnect_delay_sec": 0, 00:13:22.476 "fast_io_fail_timeout_sec": 0, 00:13:22.476 "generate_uuids": false, 00:13:22.476 "transport_tos": 0, 00:13:22.476 "io_path_stat": false, 00:13:22.476 "allow_accel_sequence": false 00:13:22.476 } 00:13:22.476 }, 00:13:22.476 { 00:13:22.476 "method": "bdev_nvme_set_hotplug", 00:13:22.476 "params": { 00:13:22.476 "period_us": 100000, 00:13:22.476 "enable": false 00:13:22.476 } 00:13:22.476 }, 00:13:22.476 { 00:13:22.476 "method": "bdev_malloc_create", 00:13:22.476 "params": { 00:13:22.476 "name": "malloc0", 00:13:22.476 "num_blocks": 8192, 00:13:22.476 "block_size": 4096, 00:13:22.476 "physical_block_size": 4096, 00:13:22.476 "uuid": "021f639a-4caa-4775-bb69-d923723c705c", 00:13:22.476 "optimal_io_boundary": 0 00:13:22.476 } 00:13:22.476 }, 00:13:22.476 { 00:13:22.476 "method": "bdev_wait_for_examine" 00:13:22.476 } 00:13:22.476 ] 00:13:22.476 }, 00:13:22.476 { 00:13:22.476 "subsystem": "scsi", 00:13:22.476 "config": null 00:13:22.476 }, 00:13:22.476 { 00:13:22.476 "subsystem": "scheduler", 00:13:22.476 "config": [ 00:13:22.476 { 00:13:22.476 "method": "framework_set_scheduler", 00:13:22.476 "params": { 00:13:22.476 "name": "static" 00:13:22.476 } 00:13:22.476 } 00:13:22.476 ] 00:13:22.476 }, 00:13:22.476 { 00:13:22.476 "subsystem": "vhost_scsi", 00:13:22.476 "config": [] 00:13:22.476 }, 00:13:22.476 { 00:13:22.476 "subsystem": "vhost_blk", 00:13:22.476 "config": [] 00:13:22.476 }, 00:13:22.476 { 00:13:22.476 "subsystem": "ublk", 00:13:22.476 "config": [ 00:13:22.476 { 00:13:22.476 "method": "ublk_create_target", 00:13:22.476 "params": { 00:13:22.476 "cpumask": "1" 00:13:22.476 } 00:13:22.476 }, 00:13:22.476 { 00:13:22.476 "method": "ublk_start_disk", 00:13:22.476 "params": { 00:13:22.476 "bdev_name": "malloc0", 00:13:22.476 "ublk_id": 0, 00:13:22.476 "num_queues": 1, 00:13:22.476 "queue_depth": 128 00:13:22.476 } 00:13:22.476 } 00:13:22.476 ] 00:13:22.476 }, 00:13:22.476 { 00:13:22.476 "subsystem": "nbd", 00:13:22.476 "config": [] 00:13:22.476 }, 00:13:22.476 { 00:13:22.476 "subsystem": "nvmf", 00:13:22.476 "config": [ 00:13:22.476 { 00:13:22.476 "method": "nvmf_set_config", 00:13:22.476 "params": { 00:13:22.476 "discovery_filter": "match_any", 00:13:22.476 "admin_cmd_passthru": { 00:13:22.476 "identify_ctrlr": false 00:13:22.476 } 00:13:22.476 } 00:13:22.476 }, 00:13:22.476 { 00:13:22.476 "method": "nvmf_set_max_subsystems", 00:13:22.476 "params": { 00:13:22.476 "max_subsystems": 1024 00:13:22.476 } 00:13:22.476 }, 00:13:22.476 { 00:13:22.476 "method": "nvmf_set_crdt", 00:13:22.476 "params": { 00:13:22.476 "crdt1": 0, 00:13:22.476 "crdt2": 0, 00:13:22.476 "crdt3": 0 00:13:22.476 } 00:13:22.476 } 00:13:22.476 ] 00:13:22.476 }, 00:13:22.476 { 00:13:22.476 "subsystem": "iscsi", 00:13:22.476 "config": [ 00:13:22.476 { 00:13:22.476 "method": "iscsi_set_options", 00:13:22.476 "params": { 00:13:22.476 "node_base": "iqn.2016-06.io.spdk", 00:13:22.476 "max_sessions": 128, 00:13:22.476 "max_connections_per_session": 2, 00:13:22.476 "max_queue_depth": 64, 00:13:22.476 "default_time2wait": 2, 00:13:22.476 "default_time2retain": 20, 00:13:22.476 "first_burst_length": 8192, 00:13:22.476 "immediate_data": true, 00:13:22.476 "allow_duplicated_isid": false, 00:13:22.476 
"error_recovery_level": 0, 00:13:22.476 "nop_timeout": 60, 00:13:22.476 "nop_in_interval": 30, 00:13:22.476 "disable_chap": false, 00:13:22.476 "require_chap": false, 00:13:22.476 "mutual_chap": false, 00:13:22.476 "chap_group": 0, 00:13:22.476 "max_large_datain_per_connection": 64, 00:13:22.476 "max_r2t_per_connection": 4, 00:13:22.476 "pdu_pool_size": 36864, 00:13:22.476 "immediate_data_pool_size": 16384, 00:13:22.476 "data_out_pool_size": 2048 00:13:22.476 } 00:13:22.476 } 00:13:22.476 ] 00:13:22.476 } 00:13:22.476 ] 00:13:22.476 }' 00:13:22.476 [2024-11-26 17:59:39.296440] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:22.476 [2024-11-26 17:59:39.296585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80672 ] 00:13:22.736 [2024-11-26 17:59:39.446123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.736 [2024-11-26 17:59:39.499753] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:22.736 [2024-11-26 17:59:39.499989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.994 [2024-11-26 17:59:39.856837] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:22.994 [2024-11-26 17:59:39.864696] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:22.994 [2024-11-26 17:59:39.864811] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:22.994 [2024-11-26 17:59:39.864827] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:22.994 [2024-11-26 17:59:39.864838] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:22.994 [2024-11-26 17:59:39.873584] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:22.994 [2024-11-26 17:59:39.873629] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:22.994 [2024-11-26 17:59:39.880518] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:22.994 [2024-11-26 17:59:39.880647] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:22.994 [2024-11-26 17:59:39.897500] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:23.253 17:59:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.253 17:59:40 -- common/autotest_common.sh@862 -- # return 0 00:13:23.253 17:59:40 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:23.253 17:59:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.253 17:59:40 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:23.253 17:59:40 -- common/autotest_common.sh@10 -- # set +x 00:13:23.253 17:59:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.253 17:59:40 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:23.253 17:59:40 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:23.253 17:59:40 -- ublk/ublk.sh@125 -- # killprocess 80672 00:13:23.253 17:59:40 -- common/autotest_common.sh@936 -- # '[' -z 80672 ']' 00:13:23.253 17:59:40 -- common/autotest_common.sh@940 -- # kill -0 80672 00:13:23.253 17:59:40 -- common/autotest_common.sh@941 -- # uname 00:13:23.253 17:59:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux 
']' 00:13:23.253 17:59:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80672 00:13:23.512 17:59:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:23.512 killing process with pid 80672 00:13:23.512 17:59:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:23.512 17:59:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80672' 00:13:23.512 17:59:40 -- common/autotest_common.sh@955 -- # kill 80672 00:13:23.512 17:59:40 -- common/autotest_common.sh@960 -- # wait 80672 00:13:23.770 [2024-11-26 17:59:40.458116] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:23.770 [2024-11-26 17:59:40.487546] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:23.770 [2024-11-26 17:59:40.487673] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:23.770 [2024-11-26 17:59:40.495483] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:23.770 [2024-11-26 17:59:40.495536] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:23.770 [2024-11-26 17:59:40.495545] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:23.770 [2024-11-26 17:59:40.495577] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:23.770 [2024-11-26 17:59:40.495711] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:24.029 17:59:40 -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:24.029 00:13:24.029 real 0m3.781s 00:13:24.029 user 0m2.516s 00:13:24.029 sys 0m1.923s 00:13:24.029 17:59:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:24.029 17:59:40 -- common/autotest_common.sh@10 -- # set +x 00:13:24.029 ************************************ 00:13:24.029 END TEST test_save_ublk_config 00:13:24.029 ************************************ 00:13:24.288 17:59:40 -- ublk/ublk.sh@139 -- # spdk_pid=80728 00:13:24.288 17:59:40 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:24.288 17:59:40 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:24.288 17:59:40 -- ublk/ublk.sh@141 -- # waitforlisten 80728 00:13:24.288 17:59:40 -- common/autotest_common.sh@829 -- # '[' -z 80728 ']' 00:13:24.288 17:59:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.288 17:59:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:24.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.288 17:59:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.288 17:59:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:24.288 17:59:41 -- common/autotest_common.sh@10 -- # set +x 00:13:24.288 [2024-11-26 17:59:41.090610] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
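test_create_ublk drives the basic create/verify/delete path against the new target (pid 80728). Condensed to its rpc.py equivalents (the test goes through its rpc_cmd wrapper over the same UNIX socket), the sequence about to appear in the log is roughly:

rpc.py ublk_create_target                      # start the ublk target inside spdk_tgt
rpc.py bdev_malloc_create 128 4096             # 128 MiB RAM bdev, 4 KiB blocks -> "Malloc0"
rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # expose it to the kernel as /dev/ublkb0
rpc.py ublk_get_disks                          # confirm id, queue_depth and bdev_name via jq

The fio job that follows then writes a 0xcc pattern through /dev/ublkb0 for 10 seconds to prove the exported block device actually carries I/O.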
00:13:24.288 [2024-11-26 17:59:41.090737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80728 ] 00:13:24.546 [2024-11-26 17:59:41.240744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:24.546 [2024-11-26 17:59:41.282836] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:24.546 [2024-11-26 17:59:41.283224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.546 [2024-11-26 17:59:41.283333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.119 17:59:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:25.119 17:59:41 -- common/autotest_common.sh@862 -- # return 0 00:13:25.119 17:59:41 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:25.119 17:59:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:25.119 17:59:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:25.119 17:59:41 -- common/autotest_common.sh@10 -- # set +x 00:13:25.119 ************************************ 00:13:25.119 START TEST test_create_ublk 00:13:25.119 ************************************ 00:13:25.119 17:59:41 -- common/autotest_common.sh@1114 -- # test_create_ublk 00:13:25.119 17:59:41 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:25.119 17:59:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.119 17:59:41 -- common/autotest_common.sh@10 -- # set +x 00:13:25.119 [2024-11-26 17:59:41.938798] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:25.119 17:59:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.119 17:59:41 -- ublk/ublk.sh@33 -- # ublk_target= 00:13:25.119 17:59:41 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:25.119 17:59:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.119 17:59:41 -- common/autotest_common.sh@10 -- # set +x 00:13:25.119 17:59:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.119 17:59:42 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:25.119 17:59:42 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:25.119 17:59:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.119 17:59:42 -- common/autotest_common.sh@10 -- # set +x 00:13:25.119 [2024-11-26 17:59:42.009617] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:25.119 [2024-11-26 17:59:42.010069] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:25.119 [2024-11-26 17:59:42.010095] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:25.119 [2024-11-26 17:59:42.010116] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:25.119 [2024-11-26 17:59:42.017826] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:25.119 [2024-11-26 17:59:42.017855] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:25.119 [2024-11-26 17:59:42.025485] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:25.119 [2024-11-26 17:59:42.033528] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:25.377 [2024-11-26 17:59:42.056496] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:13:25.377 17:59:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.377 17:59:42 -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:25.377 17:59:42 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:25.377 17:59:42 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:25.377 17:59:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.377 17:59:42 -- common/autotest_common.sh@10 -- # set +x 00:13:25.377 17:59:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.377 17:59:42 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:25.377 { 00:13:25.377 "ublk_device": "/dev/ublkb0", 00:13:25.377 "id": 0, 00:13:25.377 "queue_depth": 512, 00:13:25.377 "num_queues": 4, 00:13:25.377 "bdev_name": "Malloc0" 00:13:25.377 } 00:13:25.377 ]' 00:13:25.377 17:59:42 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:25.377 17:59:42 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:25.377 17:59:42 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:25.377 17:59:42 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:25.377 17:59:42 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:25.377 17:59:42 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:25.377 17:59:42 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:25.377 17:59:42 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:25.377 17:59:42 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:25.377 17:59:42 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:25.377 17:59:42 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:25.377 17:59:42 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:25.377 17:59:42 -- lvol/common.sh@41 -- # local offset=0 00:13:25.377 17:59:42 -- lvol/common.sh@42 -- # local size=134217728 00:13:25.377 17:59:42 -- lvol/common.sh@43 -- # local rw=write 00:13:25.377 17:59:42 -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:25.377 17:59:42 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:25.377 17:59:42 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:25.377 17:59:42 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:25.377 17:59:42 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:25.378 17:59:42 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:25.378 17:59:42 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:25.636 fio: verification read phase will never start because write phase uses all of runtime 00:13:25.636 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:25.636 fio-3.35 00:13:25.636 Starting 1 process 00:13:35.609 00:13:35.610 fio_test: (groupid=0, jobs=1): err= 0: pid=80767: Tue Nov 26 17:59:52 2024 00:13:35.610 write: IOPS=15.7k, BW=61.5MiB/s (64.5MB/s)(615MiB/10001msec); 0 zone resets 00:13:35.610 clat (usec): min=38, max=4128, avg=62.72, stdev=99.46 00:13:35.610 lat (usec): min=38, max=4128, avg=63.18, stdev=99.47 00:13:35.610 clat percentiles (usec): 00:13:35.610 | 1.00th=[ 41], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 57], 00:13:35.610 | 30.00th=[ 
58], 40.00th=[ 59], 50.00th=[ 59], 60.00th=[ 60], 00:13:35.610 | 70.00th=[ 61], 80.00th=[ 62], 90.00th=[ 64], 95.00th=[ 67], 00:13:35.610 | 99.00th=[ 76], 99.50th=[ 83], 99.90th=[ 1991], 99.95th=[ 2868], 00:13:35.610 | 99.99th=[ 3654] 00:13:35.610 bw ( KiB/s): min=62048, max=72934, per=100.00%, avg=63083.68, stdev=2425.71, samples=19 00:13:35.610 iops : min=15512, max=18233, avg=15770.89, stdev=606.31, samples=19 00:13:35.610 lat (usec) : 50=3.54%, 100=96.25%, 250=0.02%, 500=0.01%, 750=0.01% 00:13:35.610 lat (usec) : 1000=0.01% 00:13:35.610 lat (msec) : 2=0.06%, 4=0.10%, 10=0.01% 00:13:35.610 cpu : usr=3.28%, sys=9.94%, ctx=157418, majf=0, minf=794 00:13:35.610 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.610 issued rwts: total=0,157417,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.610 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:35.610 00:13:35.610 Run status group 0 (all jobs): 00:13:35.610 WRITE: bw=61.5MiB/s (64.5MB/s), 61.5MiB/s-61.5MiB/s (64.5MB/s-64.5MB/s), io=615MiB (645MB), run=10001-10001msec 00:13:35.610 00:13:35.610 Disk stats (read/write): 00:13:35.610 ublkb0: ios=0/155802, merge=0/0, ticks=0/8656, in_queue=8657, util=99.13% 00:13:35.610 17:59:52 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:35.610 17:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.610 17:59:52 -- common/autotest_common.sh@10 -- # set +x 00:13:35.868 [2024-11-26 17:59:52.536641] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:35.868 [2024-11-26 17:59:52.570560] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:35.868 [2024-11-26 17:59:52.571265] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:35.868 [2024-11-26 17:59:52.578485] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:35.868 [2024-11-26 17:59:52.578776] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:35.868 [2024-11-26 17:59:52.578790] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:35.868 17:59:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.868 17:59:52 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:13:35.868 17:59:52 -- common/autotest_common.sh@650 -- # local es=0 00:13:35.868 17:59:52 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:35.868 17:59:52 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:35.868 17:59:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:35.868 17:59:52 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:35.868 17:59:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:35.868 17:59:52 -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:13:35.868 17:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.868 17:59:52 -- common/autotest_common.sh@10 -- # set +x 00:13:35.868 [2024-11-26 17:59:52.602577] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:35.868 request: 00:13:35.868 { 00:13:35.868 "ublk_id": 0, 00:13:35.868 "method": "ublk_stop_disk", 00:13:35.868 "req_id": 1 00:13:35.868 } 00:13:35.868 Got JSON-RPC error response 00:13:35.868 response: 00:13:35.868 { 00:13:35.868 "code": -19, 00:13:35.868 
"message": "No such device" 00:13:35.868 } 00:13:35.868 17:59:52 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:35.868 17:59:52 -- common/autotest_common.sh@653 -- # es=1 00:13:35.868 17:59:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:35.868 17:59:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:35.868 17:59:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:35.868 17:59:52 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:35.868 17:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.868 17:59:52 -- common/autotest_common.sh@10 -- # set +x 00:13:35.868 [2024-11-26 17:59:52.618565] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:35.868 [2024-11-26 17:59:52.620574] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:35.868 [2024-11-26 17:59:52.620612] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:35.868 17:59:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.868 17:59:52 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:35.868 17:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.868 17:59:52 -- common/autotest_common.sh@10 -- # set +x 00:13:35.868 17:59:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.868 17:59:52 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:35.868 17:59:52 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:35.868 17:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.868 17:59:52 -- common/autotest_common.sh@10 -- # set +x 00:13:35.868 17:59:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.868 17:59:52 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:35.868 17:59:52 -- lvol/common.sh@26 -- # jq length 00:13:35.868 17:59:52 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:35.868 17:59:52 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:35.868 17:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.868 17:59:52 -- common/autotest_common.sh@10 -- # set +x 00:13:36.127 17:59:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.127 17:59:52 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:36.127 17:59:52 -- lvol/common.sh@28 -- # jq length 00:13:36.127 17:59:52 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:36.127 00:13:36.127 real 0m10.926s 00:13:36.127 user 0m0.709s 00:13:36.127 sys 0m1.107s 00:13:36.127 17:59:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:36.127 17:59:52 -- common/autotest_common.sh@10 -- # set +x 00:13:36.127 ************************************ 00:13:36.127 END TEST test_create_ublk 00:13:36.127 ************************************ 00:13:36.127 17:59:52 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:36.127 17:59:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:36.127 17:59:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:36.127 17:59:52 -- common/autotest_common.sh@10 -- # set +x 00:13:36.127 ************************************ 00:13:36.127 START TEST test_create_multi_ublk 00:13:36.127 ************************************ 00:13:36.127 17:59:52 -- common/autotest_common.sh@1114 -- # test_create_multi_ublk 00:13:36.127 17:59:52 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:36.127 17:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.127 17:59:52 -- common/autotest_common.sh@10 -- # set +x 00:13:36.127 [2024-11-26 17:59:52.937696] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created 
successfully 00:13:36.127 17:59:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.127 17:59:52 -- ublk/ublk.sh@62 -- # ublk_target= 00:13:36.127 17:59:52 -- ublk/ublk.sh@64 -- # seq 0 3 00:13:36.127 17:59:52 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:36.127 17:59:52 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:36.127 17:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.127 17:59:52 -- common/autotest_common.sh@10 -- # set +x 00:13:36.127 17:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.127 17:59:53 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:36.127 17:59:53 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:36.127 17:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.127 17:59:53 -- common/autotest_common.sh@10 -- # set +x 00:13:36.386 [2024-11-26 17:59:53.056632] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:36.386 [2024-11-26 17:59:53.057086] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:36.386 [2024-11-26 17:59:53.057102] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:36.386 [2024-11-26 17:59:53.057113] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:36.386 [2024-11-26 17:59:53.080484] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:36.386 [2024-11-26 17:59:53.080513] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:36.386 [2024-11-26 17:59:53.092495] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:36.386 [2024-11-26 17:59:53.093056] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:36.386 [2024-11-26 17:59:53.132487] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:36.386 17:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.386 17:59:53 -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:36.386 17:59:53 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:36.386 17:59:53 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:36.386 17:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.386 17:59:53 -- common/autotest_common.sh@10 -- # set +x 00:13:36.386 17:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.386 17:59:53 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:36.386 17:59:53 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:36.386 17:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.386 17:59:53 -- common/autotest_common.sh@10 -- # set +x 00:13:36.386 [2024-11-26 17:59:53.252612] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:36.386 [2024-11-26 17:59:53.253053] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:36.386 [2024-11-26 17:59:53.253073] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:36.386 [2024-11-26 17:59:53.253081] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:36.386 [2024-11-26 17:59:53.264499] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:36.386 [2024-11-26 17:59:53.264523] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:36.386 
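The ublk.sh@64 loop above repeats the same create pattern once per device id; a condensed sketch of the whole pass, assuming MAX_DEV_ID=3 as the four-device output later in the log implies:

for i in 0 1 2 3; do
  rpc.py bdev_malloc_create -b "Malloc$i" 128 4096     # one backing bdev per device
  rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512   # becomes /dev/ublkb$i
done
rpc.py ublk_get_disks    # expect a four-entry array, /dev/ublkb0 through /dev/ublkb3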
[2024-11-26 17:59:53.276492] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:36.386 [2024-11-26 17:59:53.277094] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:36.645 [2024-11-26 17:59:53.312502] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:36.645 17:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.645 17:59:53 -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:36.645 17:59:53 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:36.645 17:59:53 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:36.645 17:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.645 17:59:53 -- common/autotest_common.sh@10 -- # set +x 00:13:36.645 17:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.645 17:59:53 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:36.645 17:59:53 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:36.645 17:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.645 17:59:53 -- common/autotest_common.sh@10 -- # set +x 00:13:36.645 [2024-11-26 17:59:53.432628] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:36.645 [2024-11-26 17:59:53.433126] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:36.645 [2024-11-26 17:59:53.433143] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:36.645 [2024-11-26 17:59:53.433154] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:36.645 [2024-11-26 17:59:53.444486] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:36.645 [2024-11-26 17:59:53.444513] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:36.645 [2024-11-26 17:59:53.456488] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:36.645 [2024-11-26 17:59:53.457051] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:36.645 [2024-11-26 17:59:53.461944] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:36.645 17:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.645 17:59:53 -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:36.645 17:59:53 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:36.645 17:59:53 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:36.645 17:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.645 17:59:53 -- common/autotest_common.sh@10 -- # set +x 00:13:36.645 17:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.645 17:59:53 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:36.645 17:59:53 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:36.645 17:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.645 17:59:53 -- common/autotest_common.sh@10 -- # set +x 00:13:36.645 [2024-11-26 17:59:53.568818] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:36.645 [2024-11-26 17:59:53.569283] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:36.645 [2024-11-26 17:59:53.569303] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:36.645 [2024-11-26 17:59:53.569312] ublk.c: 433:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:36.904 [2024-11-26 17:59:53.581491] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:36.904 [2024-11-26 17:59:53.581514] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:36.904 [2024-11-26 17:59:53.592479] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:36.904 [2024-11-26 17:59:53.593040] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:36.904 [2024-11-26 17:59:53.601515] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:36.904 17:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.904 17:59:53 -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:36.904 17:59:53 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:36.904 17:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.904 17:59:53 -- common/autotest_common.sh@10 -- # set +x 00:13:36.904 17:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.904 17:59:53 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:36.904 { 00:13:36.904 "ublk_device": "/dev/ublkb0", 00:13:36.904 "id": 0, 00:13:36.904 "queue_depth": 512, 00:13:36.904 "num_queues": 4, 00:13:36.904 "bdev_name": "Malloc0" 00:13:36.904 }, 00:13:36.904 { 00:13:36.904 "ublk_device": "/dev/ublkb1", 00:13:36.904 "id": 1, 00:13:36.904 "queue_depth": 512, 00:13:36.904 "num_queues": 4, 00:13:36.904 "bdev_name": "Malloc1" 00:13:36.904 }, 00:13:36.904 { 00:13:36.904 "ublk_device": "/dev/ublkb2", 00:13:36.904 "id": 2, 00:13:36.904 "queue_depth": 512, 00:13:36.904 "num_queues": 4, 00:13:36.904 "bdev_name": "Malloc2" 00:13:36.904 }, 00:13:36.904 { 00:13:36.904 "ublk_device": "/dev/ublkb3", 00:13:36.904 "id": 3, 00:13:36.904 "queue_depth": 512, 00:13:36.904 "num_queues": 4, 00:13:36.904 "bdev_name": "Malloc3" 00:13:36.904 } 00:13:36.904 ]' 00:13:36.904 17:59:53 -- ublk/ublk.sh@72 -- # seq 0 3 00:13:36.904 17:59:53 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:36.904 17:59:53 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:36.904 17:59:53 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:36.904 17:59:53 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:36.904 17:59:53 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:36.904 17:59:53 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:36.904 17:59:53 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:36.904 17:59:53 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:36.904 17:59:53 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:36.904 17:59:53 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:37.162 17:59:53 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:37.162 17:59:53 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:37.162 17:59:53 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:37.162 17:59:53 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:13:37.162 17:59:53 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:37.162 17:59:53 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:37.162 17:59:53 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:37.162 17:59:53 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:37.162 17:59:53 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:37.162 17:59:54 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:37.162 17:59:54 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:37.162 17:59:54 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:13:37.162 17:59:54 -- 
ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:37.162 17:59:54 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:37.421 17:59:54 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:37.421 17:59:54 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:37.421 17:59:54 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:37.421 17:59:54 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:37.421 17:59:54 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:37.421 17:59:54 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:37.421 17:59:54 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:37.421 17:59:54 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:37.421 17:59:54 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:37.421 17:59:54 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:37.421 17:59:54 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:37.421 17:59:54 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:37.421 17:59:54 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:37.421 17:59:54 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:37.421 17:59:54 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:37.680 17:59:54 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:37.680 17:59:54 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:37.680 17:59:54 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:37.680 17:59:54 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:37.680 17:59:54 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:37.680 17:59:54 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:37.680 17:59:54 -- ublk/ublk.sh@85 -- # seq 0 3 00:13:37.680 17:59:54 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:37.680 17:59:54 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:37.680 17:59:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.680 17:59:54 -- common/autotest_common.sh@10 -- # set +x 00:13:37.680 [2024-11-26 17:59:54.440604] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:37.680 [2024-11-26 17:59:54.474888] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:37.680 [2024-11-26 17:59:54.475989] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:37.680 [2024-11-26 17:59:54.482497] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:37.680 [2024-11-26 17:59:54.482775] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:37.680 [2024-11-26 17:59:54.482791] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:37.680 17:59:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.680 17:59:54 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:37.680 17:59:54 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:37.680 17:59:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.680 17:59:54 -- common/autotest_common.sh@10 -- # set +x 00:13:37.680 [2024-11-26 17:59:54.498587] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:37.680 [2024-11-26 17:59:54.530912] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:37.680 [2024-11-26 17:59:54.531907] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:37.680 [2024-11-26 17:59:54.537548] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:37.680 [2024-11-26 17:59:54.537817] ublk.c: 
947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:37.680 [2024-11-26 17:59:54.537833] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:37.680 17:59:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.680 17:59:54 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:37.680 17:59:54 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:37.680 17:59:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.680 17:59:54 -- common/autotest_common.sh@10 -- # set +x 00:13:37.680 [2024-11-26 17:59:54.554569] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:37.680 [2024-11-26 17:59:54.583911] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:37.680 [2024-11-26 17:59:54.584866] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:37.680 [2024-11-26 17:59:54.594502] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:37.680 [2024-11-26 17:59:54.594773] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:37.680 [2024-11-26 17:59:54.594787] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:37.938 17:59:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.938 17:59:54 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:37.938 17:59:54 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:37.938 17:59:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.938 17:59:54 -- common/autotest_common.sh@10 -- # set +x 00:13:37.938 [2024-11-26 17:59:54.610588] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:37.938 [2024-11-26 17:59:54.644509] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:37.938 [2024-11-26 17:59:54.645237] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:37.938 [2024-11-26 17:59:54.653490] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:37.938 [2024-11-26 17:59:54.653775] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:37.938 [2024-11-26 17:59:54.653790] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:37.938 17:59:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.938 17:59:54 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:37.938 [2024-11-26 17:59:54.837589] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:37.938 [2024-11-26 17:59:54.838952] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:37.938 [2024-11-26 17:59:54.838995] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:38.197 17:59:54 -- ublk/ublk.sh@93 -- # seq 0 3 00:13:38.197 17:59:54 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:38.197 17:59:54 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:38.197 17:59:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.197 17:59:54 -- common/autotest_common.sh@10 -- # set +x 00:13:38.197 17:59:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.197 17:59:54 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:38.197 17:59:54 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:38.197 17:59:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.197 17:59:54 -- common/autotest_common.sh@10 -- # set +x 00:13:38.197 17:59:55 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.197 17:59:55 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:38.197 17:59:55 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:38.197 17:59:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.197 17:59:55 -- common/autotest_common.sh@10 -- # set +x 00:13:38.456 17:59:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.456 17:59:55 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:38.456 17:59:55 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:38.456 17:59:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.456 17:59:55 -- common/autotest_common.sh@10 -- # set +x 00:13:38.456 17:59:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.456 17:59:55 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:38.456 17:59:55 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:38.456 17:59:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.456 17:59:55 -- common/autotest_common.sh@10 -- # set +x 00:13:38.456 17:59:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.456 17:59:55 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:38.456 17:59:55 -- lvol/common.sh@26 -- # jq length 00:13:38.456 17:59:55 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:38.456 17:59:55 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:38.456 17:59:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.456 17:59:55 -- common/autotest_common.sh@10 -- # set +x 00:13:38.456 17:59:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.456 17:59:55 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:38.456 17:59:55 -- lvol/common.sh@28 -- # jq length 00:13:38.456 ************************************ 00:13:38.456 END TEST test_create_multi_ublk 00:13:38.456 ************************************ 00:13:38.456 17:59:55 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:38.456 00:13:38.456 real 0m2.455s 00:13:38.456 user 0m0.959s 00:13:38.456 sys 0m0.213s 00:13:38.456 17:59:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:38.456 17:59:55 -- common/autotest_common.sh@10 -- # set +x 00:13:38.715 17:59:55 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:38.715 17:59:55 -- ublk/ublk.sh@147 -- # cleanup 00:13:38.715 17:59:55 -- ublk/ublk.sh@130 -- # killprocess 80728 00:13:38.715 17:59:55 -- common/autotest_common.sh@936 -- # '[' -z 80728 ']' 00:13:38.715 17:59:55 -- common/autotest_common.sh@940 -- # kill -0 80728 00:13:38.715 17:59:55 -- common/autotest_common.sh@941 -- # uname 00:13:38.715 17:59:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:38.715 17:59:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80728 00:13:38.715 killing process with pid 80728 00:13:38.715 17:59:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:38.715 17:59:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:38.715 17:59:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80728' 00:13:38.715 17:59:55 -- common/autotest_common.sh@955 -- # kill 80728 00:13:38.715 17:59:55 -- common/autotest_common.sh@960 -- # wait 80728 00:13:38.975 [2024-11-26 17:59:55.705579] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:38.975 [2024-11-26 17:59:55.705866] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:39.235 00:13:39.235 real 0m19.044s 00:13:39.235 user 0m29.305s 00:13:39.235 sys 0m8.256s 00:13:39.235 ************************************ 
00:13:39.235 END TEST ublk 00:13:39.235 ************************************ 00:13:39.235 17:59:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:39.235 17:59:55 -- common/autotest_common.sh@10 -- # set +x 00:13:39.235 17:59:56 -- spdk/autotest.sh@247 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:39.235 17:59:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:39.235 17:59:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:39.235 17:59:56 -- common/autotest_common.sh@10 -- # set +x 00:13:39.235 ************************************ 00:13:39.235 START TEST ublk_recovery 00:13:39.235 ************************************ 00:13:39.235 17:59:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:39.235 * Looking for test storage... 00:13:39.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:39.235 17:59:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:39.235 17:59:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:39.235 17:59:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:39.494 17:59:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:39.494 17:59:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:39.494 17:59:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:39.494 17:59:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:39.494 17:59:56 -- scripts/common.sh@335 -- # IFS=.-: 00:13:39.494 17:59:56 -- scripts/common.sh@335 -- # read -ra ver1 00:13:39.494 17:59:56 -- scripts/common.sh@336 -- # IFS=.-: 00:13:39.494 17:59:56 -- scripts/common.sh@336 -- # read -ra ver2 00:13:39.494 17:59:56 -- scripts/common.sh@337 -- # local 'op=<' 00:13:39.494 17:59:56 -- scripts/common.sh@339 -- # ver1_l=2 00:13:39.494 17:59:56 -- scripts/common.sh@340 -- # ver2_l=1 00:13:39.494 17:59:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:39.494 17:59:56 -- scripts/common.sh@343 -- # case "$op" in 00:13:39.494 17:59:56 -- scripts/common.sh@344 -- # : 1 00:13:39.494 17:59:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:39.494 17:59:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:39.494 17:59:56 -- scripts/common.sh@364 -- # decimal 1 00:13:39.494 17:59:56 -- scripts/common.sh@352 -- # local d=1 00:13:39.494 17:59:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:39.494 17:59:56 -- scripts/common.sh@354 -- # echo 1 00:13:39.494 17:59:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:39.494 17:59:56 -- scripts/common.sh@365 -- # decimal 2 00:13:39.494 17:59:56 -- scripts/common.sh@352 -- # local d=2 00:13:39.494 17:59:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:39.494 17:59:56 -- scripts/common.sh@354 -- # echo 2 00:13:39.494 17:59:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:39.494 17:59:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:39.494 17:59:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:39.494 17:59:56 -- scripts/common.sh@367 -- # return 0 00:13:39.494 17:59:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:39.494 17:59:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:39.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.494 --rc genhtml_branch_coverage=1 00:13:39.494 --rc genhtml_function_coverage=1 00:13:39.494 --rc genhtml_legend=1 00:13:39.494 --rc geninfo_all_blocks=1 00:13:39.494 --rc geninfo_unexecuted_blocks=1 00:13:39.494 00:13:39.494 ' 00:13:39.494 17:59:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:39.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.495 --rc genhtml_branch_coverage=1 00:13:39.495 --rc genhtml_function_coverage=1 00:13:39.495 --rc genhtml_legend=1 00:13:39.495 --rc geninfo_all_blocks=1 00:13:39.495 --rc geninfo_unexecuted_blocks=1 00:13:39.495 00:13:39.495 ' 00:13:39.495 17:59:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:39.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.495 --rc genhtml_branch_coverage=1 00:13:39.495 --rc genhtml_function_coverage=1 00:13:39.495 --rc genhtml_legend=1 00:13:39.495 --rc geninfo_all_blocks=1 00:13:39.495 --rc geninfo_unexecuted_blocks=1 00:13:39.495 00:13:39.495 ' 00:13:39.495 17:59:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:39.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.495 --rc genhtml_branch_coverage=1 00:13:39.495 --rc genhtml_function_coverage=1 00:13:39.495 --rc genhtml_legend=1 00:13:39.495 --rc geninfo_all_blocks=1 00:13:39.495 --rc geninfo_unexecuted_blocks=1 00:13:39.495 00:13:39.495 ' 00:13:39.495 17:59:56 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:39.495 17:59:56 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:39.495 17:59:56 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:39.495 17:59:56 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:39.495 17:59:56 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:39.495 17:59:56 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:39.495 17:59:56 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:39.495 17:59:56 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:39.495 17:59:56 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:39.495 17:59:56 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:39.495 17:59:56 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=81102 00:13:39.495 17:59:56 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:39.495 17:59:56 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' 
SIGINT SIGTERM EXIT 00:13:39.495 17:59:56 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 81102 00:13:39.495 17:59:56 -- common/autotest_common.sh@829 -- # '[' -z 81102 ']' 00:13:39.495 17:59:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.495 17:59:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:39.495 17:59:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.495 17:59:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:39.495 17:59:56 -- common/autotest_common.sh@10 -- # set +x 00:13:39.495 [2024-11-26 17:59:56.341928] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:39.495 [2024-11-26 17:59:56.342078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81102 ] 00:13:39.754 [2024-11-26 17:59:56.495510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:39.754 [2024-11-26 17:59:56.535115] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:39.754 [2024-11-26 17:59:56.535552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.754 [2024-11-26 17:59:56.535618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.380 17:59:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:40.380 17:59:57 -- common/autotest_common.sh@862 -- # return 0 00:13:40.380 17:59:57 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:40.380 17:59:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.380 17:59:57 -- common/autotest_common.sh@10 -- # set +x 00:13:40.380 [2024-11-26 17:59:57.151796] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:40.380 17:59:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.380 17:59:57 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:40.380 17:59:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.380 17:59:57 -- common/autotest_common.sh@10 -- # set +x 00:13:40.380 malloc0 00:13:40.380 17:59:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.380 17:59:57 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:40.380 17:59:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.380 17:59:57 -- common/autotest_common.sh@10 -- # set +x 00:13:40.380 [2024-11-26 17:59:57.190885] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:13:40.380 [2024-11-26 17:59:57.191004] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:40.380 [2024-11-26 17:59:57.191015] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:40.380 [2024-11-26 17:59:57.191035] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:40.380 [2024-11-26 17:59:57.199628] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:40.380 [2024-11-26 17:59:57.199665] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:40.380 [2024-11-26 17:59:57.206481] ublk.c: 327:ublk_ctrl_process_cqe: 
*DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:40.380 [2024-11-26 17:59:57.206622] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:40.380 [2024-11-26 17:59:57.228486] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:40.380 1 00:13:40.380 17:59:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.380 17:59:57 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:41.755 17:59:58 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:41.755 17:59:58 -- ublk/ublk_recovery.sh@31 -- # fio_proc=81135 00:13:41.755 17:59:58 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:41.755 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:41.755 fio-3.35 00:13:41.755 Starting 1 process 00:13:47.022 18:00:03 -- ublk/ublk_recovery.sh@36 -- # kill -9 81102 00:13:47.022 18:00:03 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:13:52.292 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 81102 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:13:52.292 18:00:08 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=81241 00:13:52.292 18:00:08 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:52.292 18:00:08 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:52.292 18:00:08 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 81241 00:13:52.292 18:00:08 -- common/autotest_common.sh@829 -- # '[' -z 81241 ']' 00:13:52.292 18:00:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.292 18:00:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:52.292 18:00:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.292 18:00:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:52.292 18:00:08 -- common/autotest_common.sh@10 -- # set +x 00:13:52.292 [2024-11-26 18:00:08.346133] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
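(For orientation, the recovery flow this run traces above and below can be condensed to the sketch that follows. It is a minimal sketch, not the script itself: rpc.py stands in for the rpc_cmd wrapper and the full /home/vagrant/spdk_repo paths, the PIDs are whatever the run assigns, and the real script also waits for the RPC socket between target start and the first RPC.)

    # minimal sketch: crash spdk_tgt under fio load on a ublk disk, then
    # reattach the same disk via ublk user recovery
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &
    spdk_pid=$!
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096
    rpc.py ublk_start_disk malloc0 1 -q 2 -d 128       # exposes /dev/ublkb1
    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    fio_pid=$!
    kill -9 "$spdk_pid"                                # crash the target mid-I/O
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &
    spdk_pid=$!
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096
    rpc.py ublk_recover_disk malloc0 1                 # UBLK_CMD_START_USER_RECOVERY
    wait "$fio_pid"                                    # fio should finish its full 60 s

The run below bears this out: fio completes its full 60 s against /dev/ublkb1 with util=99.94% even though the first target was SIGKILLed a few seconds in.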
00:13:52.292 [2024-11-26 18:00:08.346269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81241 ] 00:13:52.292 [2024-11-26 18:00:08.498132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:52.292 [2024-11-26 18:00:08.538708] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:52.292 [2024-11-26 18:00:08.539101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.292 [2024-11-26 18:00:08.539140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.292 18:00:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.292 18:00:09 -- common/autotest_common.sh@862 -- # return 0 00:13:52.292 18:00:09 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:13:52.292 18:00:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.292 18:00:09 -- common/autotest_common.sh@10 -- # set +x 00:13:52.292 [2024-11-26 18:00:09.213852] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:52.292 18:00:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.292 18:00:09 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:52.292 18:00:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.292 18:00:09 -- common/autotest_common.sh@10 -- # set +x 00:13:52.566 malloc0 00:13:52.566 18:00:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.566 18:00:09 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:13:52.566 18:00:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.566 18:00:09 -- common/autotest_common.sh@10 -- # set +x 00:13:52.566 [2024-11-26 18:00:09.252886] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:13:52.566 [2024-11-26 18:00:09.252932] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:52.566 [2024-11-26 18:00:09.252954] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:52.566 [2024-11-26 18:00:09.260519] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:52.566 [2024-11-26 18:00:09.260546] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:13:52.566 [2024-11-26 18:00:09.260627] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:13:52.566 1 00:13:52.566 18:00:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.566 18:00:09 -- ublk/ublk_recovery.sh@52 -- # wait 81135 00:13:52.566 [2024-11-26 18:00:09.268480] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:13:52.566 [2024-11-26 18:00:09.275181] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:13:52.566 [2024-11-26 18:00:09.282663] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:13:52.566 [2024-11-26 18:00:09.282693] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:14:48.798 00:14:48.798 fio_test: (groupid=0, jobs=1): err= 0: pid=81139: Tue Nov 26 18:00:58 2024 00:14:48.798 read: IOPS=23.6k, BW=92.2MiB/s (96.7MB/s)(5531MiB/60004msec) 00:14:48.798 slat (nsec): min=1916, max=400546, 
avg=6768.52, stdev=2034.71 00:14:48.798 clat (usec): min=1319, max=6044.6k, avg=2628.97, stdev=37651.44 00:14:48.798 lat (usec): min=1326, max=6044.6k, avg=2635.74, stdev=37651.45 00:14:48.798 clat percentiles (usec): 00:14:48.798 | 1.00th=[ 1876], 5.00th=[ 2073], 10.00th=[ 2114], 20.00th=[ 2180], 00:14:48.798 | 30.00th=[ 2212], 40.00th=[ 2245], 50.00th=[ 2245], 60.00th=[ 2278], 00:14:48.798 | 70.00th=[ 2311], 80.00th=[ 2343], 90.00th=[ 2704], 95.00th=[ 3589], 00:14:48.798 | 99.00th=[ 4883], 99.50th=[ 5342], 99.90th=[ 6390], 99.95th=[ 7111], 00:14:48.798 | 99.99th=[12518] 00:14:48.798 bw ( KiB/s): min=19424, max=108560, per=100.00%, avg=104039.43, stdev=10983.64, samples=108 00:14:48.798 iops : min= 4856, max=27140, avg=26009.84, stdev=2745.91, samples=108 00:14:48.798 write: IOPS=23.6k, BW=92.1MiB/s (96.6MB/s)(5525MiB/60004msec); 0 zone resets 00:14:48.798 slat (nsec): min=1886, max=2380.9k, avg=6777.62, stdev=2961.93 00:14:48.798 clat (usec): min=1325, max=6044.7k, avg=2783.40, stdev=43401.22 00:14:48.798 lat (usec): min=1331, max=6044.7k, avg=2790.18, stdev=43401.22 00:14:48.798 clat percentiles (usec): 00:14:48.798 | 1.00th=[ 1893], 5.00th=[ 2040], 10.00th=[ 2180], 20.00th=[ 2278], 00:14:48.798 | 30.00th=[ 2311], 40.00th=[ 2343], 50.00th=[ 2376], 60.00th=[ 2376], 00:14:48.798 | 70.00th=[ 2409], 80.00th=[ 2442], 90.00th=[ 2671], 95.00th=[ 3589], 00:14:48.798 | 99.00th=[ 4883], 99.50th=[ 5407], 99.90th=[ 6521], 99.95th=[ 7373], 00:14:48.798 | 99.99th=[ 9372] 00:14:48.798 bw ( KiB/s): min=20464, max=108136, per=100.00%, avg=103904.70, stdev=10819.91, samples=108 00:14:48.798 iops : min= 5116, max=27034, avg=25976.18, stdev=2704.98, samples=108 00:14:48.798 lat (msec) : 2=3.10%, 4=93.60%, 10=3.29%, 20=0.01%, >=2000=0.01% 00:14:48.798 cpu : usr=11.53%, sys=32.36%, ctx=120892, majf=0, minf=14 00:14:48.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:14:48.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:48.798 issued rwts: total=1416012,1414416,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:48.798 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:48.798 00:14:48.798 Run status group 0 (all jobs): 00:14:48.798 READ: bw=92.2MiB/s (96.7MB/s), 92.2MiB/s-92.2MiB/s (96.7MB/s-96.7MB/s), io=5531MiB (5800MB), run=60004-60004msec 00:14:48.798 WRITE: bw=92.1MiB/s (96.6MB/s), 92.1MiB/s-92.1MiB/s (96.6MB/s-96.6MB/s), io=5525MiB (5793MB), run=60004-60004msec 00:14:48.798 00:14:48.798 Disk stats (read/write): 00:14:48.798 ublkb1: ios=1413152/1411425, merge=0/0, ticks=3607349/3687397, in_queue=7294747, util=99.94% 00:14:48.798 18:00:58 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:14:48.798 18:00:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.798 18:00:58 -- common/autotest_common.sh@10 -- # set +x 00:14:48.798 [2024-11-26 18:00:58.528225] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:48.799 [2024-11-26 18:00:58.562499] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:48.799 [2024-11-26 18:00:58.562697] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:48.799 [2024-11-26 18:00:58.566632] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:48.799 [2024-11-26 18:00:58.566758] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:48.799 
[2024-11-26 18:00:58.566775] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:48.799 18:00:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.799 18:00:58 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:14:48.799 18:00:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.799 18:00:58 -- common/autotest_common.sh@10 -- # set +x 00:14:48.799 [2024-11-26 18:00:58.585598] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:48.799 [2024-11-26 18:00:58.587382] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:48.799 [2024-11-26 18:00:58.587441] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:48.799 18:00:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.799 18:00:58 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:14:48.799 18:00:58 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:14:48.799 18:00:58 -- ublk/ublk_recovery.sh@14 -- # killprocess 81241 00:14:48.799 18:00:58 -- common/autotest_common.sh@936 -- # '[' -z 81241 ']' 00:14:48.799 18:00:58 -- common/autotest_common.sh@940 -- # kill -0 81241 00:14:48.799 18:00:58 -- common/autotest_common.sh@941 -- # uname 00:14:48.799 18:00:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:48.799 18:00:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81241 00:14:48.799 18:00:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:48.799 18:00:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:48.799 18:00:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81241' 00:14:48.799 killing process with pid 81241 00:14:48.799 18:00:58 -- common/autotest_common.sh@955 -- # kill 81241 00:14:48.799 18:00:58 -- common/autotest_common.sh@960 -- # wait 81241 00:14:48.799 [2024-11-26 18:00:58.921407] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:48.799 [2024-11-26 18:00:58.921470] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:48.799 00:14:48.799 real 1m3.374s 00:14:48.799 user 1m43.283s 00:14:48.799 sys 0m39.218s 00:14:48.799 18:00:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:48.799 18:00:59 -- common/autotest_common.sh@10 -- # set +x 00:14:48.799 ************************************ 00:14:48.799 END TEST ublk_recovery 00:14:48.799 ************************************ 00:14:48.799 18:00:59 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:14:48.799 18:00:59 -- spdk/autotest.sh@255 -- # timing_exit lib 00:14:48.799 18:00:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:48.799 18:00:59 -- common/autotest_common.sh@10 -- # set +x 00:14:48.799 18:00:59 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:14:48.799 18:00:59 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:14:48.799 18:00:59 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:14:48.799 18:00:59 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:14:48.799 18:00:59 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:14:48.799 18:00:59 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:14:48.799 18:00:59 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:14:48.799 18:00:59 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:14:48.799 18:00:59 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:14:48.799 18:00:59 -- spdk/autotest.sh@329 -- # '[' 1 -eq 1 ']' 00:14:48.799 18:00:59 -- spdk/autotest.sh@330 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:48.799 18:00:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:48.799 18:00:59 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:14:48.799 18:00:59 -- common/autotest_common.sh@10 -- # set +x 00:14:48.799 ************************************ 00:14:48.799 START TEST ftl 00:14:48.799 ************************************ 00:14:48.799 18:00:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:48.799 * Looking for test storage... 00:14:48.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:48.799 18:00:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:48.799 18:00:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:48.799 18:00:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:48.799 18:00:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:48.799 18:00:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:48.799 18:00:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:48.799 18:00:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:48.799 18:00:59 -- scripts/common.sh@335 -- # IFS=.-: 00:14:48.799 18:00:59 -- scripts/common.sh@335 -- # read -ra ver1 00:14:48.799 18:00:59 -- scripts/common.sh@336 -- # IFS=.-: 00:14:48.799 18:00:59 -- scripts/common.sh@336 -- # read -ra ver2 00:14:48.799 18:00:59 -- scripts/common.sh@337 -- # local 'op=<' 00:14:48.799 18:00:59 -- scripts/common.sh@339 -- # ver1_l=2 00:14:48.799 18:00:59 -- scripts/common.sh@340 -- # ver2_l=1 00:14:48.799 18:00:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:48.799 18:00:59 -- scripts/common.sh@343 -- # case "$op" in 00:14:48.799 18:00:59 -- scripts/common.sh@344 -- # : 1 00:14:48.799 18:00:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:48.799 18:00:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:48.799 18:00:59 -- scripts/common.sh@364 -- # decimal 1 00:14:48.799 18:00:59 -- scripts/common.sh@352 -- # local d=1 00:14:48.799 18:00:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:48.799 18:00:59 -- scripts/common.sh@354 -- # echo 1 00:14:48.799 18:00:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:48.799 18:00:59 -- scripts/common.sh@365 -- # decimal 2 00:14:48.799 18:00:59 -- scripts/common.sh@352 -- # local d=2 00:14:48.799 18:00:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:48.799 18:00:59 -- scripts/common.sh@354 -- # echo 2 00:14:48.799 18:00:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:48.799 18:00:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:48.799 18:00:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:48.799 18:00:59 -- scripts/common.sh@367 -- # return 0 00:14:48.799 18:00:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:48.799 18:00:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:48.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.799 --rc genhtml_branch_coverage=1 00:14:48.799 --rc genhtml_function_coverage=1 00:14:48.799 --rc genhtml_legend=1 00:14:48.799 --rc geninfo_all_blocks=1 00:14:48.799 --rc geninfo_unexecuted_blocks=1 00:14:48.799 00:14:48.799 ' 00:14:48.799 18:00:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:48.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.799 --rc genhtml_branch_coverage=1 00:14:48.799 --rc genhtml_function_coverage=1 00:14:48.799 --rc genhtml_legend=1 00:14:48.799 --rc geninfo_all_blocks=1 00:14:48.799 --rc geninfo_unexecuted_blocks=1 00:14:48.799 00:14:48.799 ' 00:14:48.799 18:00:59 -- common/autotest_common.sh@1704 
-- # export 'LCOV=lcov 00:14:48.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.799 --rc genhtml_branch_coverage=1 00:14:48.799 --rc genhtml_function_coverage=1 00:14:48.799 --rc genhtml_legend=1 00:14:48.799 --rc geninfo_all_blocks=1 00:14:48.799 --rc geninfo_unexecuted_blocks=1 00:14:48.799 00:14:48.799 ' 00:14:48.799 18:00:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:48.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.799 --rc genhtml_branch_coverage=1 00:14:48.799 --rc genhtml_function_coverage=1 00:14:48.799 --rc genhtml_legend=1 00:14:48.799 --rc geninfo_all_blocks=1 00:14:48.799 --rc geninfo_unexecuted_blocks=1 00:14:48.799 00:14:48.799 ' 00:14:48.799 18:00:59 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:48.799 18:00:59 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:48.799 18:00:59 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:48.799 18:00:59 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:48.799 18:00:59 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:48.799 18:00:59 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:48.799 18:00:59 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.799 18:00:59 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:48.799 18:00:59 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:48.799 18:00:59 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:48.799 18:00:59 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:48.799 18:00:59 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:48.799 18:00:59 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:48.799 18:00:59 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:48.799 18:00:59 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:48.799 18:00:59 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:48.799 18:00:59 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:48.799 18:00:59 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:48.799 18:00:59 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:48.799 18:00:59 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:48.799 18:00:59 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:48.799 18:00:59 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:48.799 18:00:59 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:48.799 18:00:59 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:48.799 18:00:59 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:48.799 18:00:59 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:48.800 18:00:59 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:48.800 18:00:59 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:48.800 18:00:59 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:48.800 18:00:59 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.800 18:00:59 -- ftl/ftl.sh@31 -- # trap at_ftl_exit 
SIGINT SIGTERM EXIT 00:14:48.800 18:00:59 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:14:48.800 18:00:59 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:14:48.800 18:00:59 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:14:48.800 18:00:59 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:48.800 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:48.800 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:48.800 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:48.800 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:48.800 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:48.800 18:01:00 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=82044 00:14:48.800 18:01:00 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:48.800 18:01:00 -- ftl/ftl.sh@38 -- # waitforlisten 82044 00:14:48.800 18:01:00 -- common/autotest_common.sh@829 -- # '[' -z 82044 ']' 00:14:48.800 18:01:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.800 18:01:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:48.800 18:01:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.800 18:01:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:48.800 18:01:00 -- common/autotest_common.sh@10 -- # set +x 00:14:48.800 [2024-11-26 18:01:00.678705] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:48.800 [2024-11-26 18:01:00.678836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82044 ] 00:14:48.800 [2024-11-26 18:01:00.823709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.800 [2024-11-26 18:01:00.863129] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:48.800 [2024-11-26 18:01:00.863314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.800 18:01:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:48.800 18:01:01 -- common/autotest_common.sh@862 -- # return 0 00:14:48.800 18:01:01 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:14:48.800 18:01:01 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:48.800 18:01:02 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:14:48.800 18:01:02 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:48.800 18:01:02 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:14:48.800 18:01:02 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:48.800 18:01:02 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:48.800 18:01:02 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:14:48.800 18:01:02 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:14:48.800 18:01:02 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:14:48.800 18:01:02 -- ftl/ftl.sh@50 -- # break 00:14:48.800 18:01:02 -- 
ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:14:48.800 18:01:02 -- ftl/ftl.sh@59 -- # base_size=1310720 00:14:48.800 18:01:02 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:48.800 18:01:02 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:48.800 18:01:02 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:14:48.800 18:01:02 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:14:48.800 18:01:02 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:14:48.800 18:01:02 -- ftl/ftl.sh@63 -- # break 00:14:48.800 18:01:02 -- ftl/ftl.sh@66 -- # killprocess 82044 00:14:48.800 18:01:02 -- common/autotest_common.sh@936 -- # '[' -z 82044 ']' 00:14:48.800 18:01:02 -- common/autotest_common.sh@940 -- # kill -0 82044 00:14:48.800 18:01:02 -- common/autotest_common.sh@941 -- # uname 00:14:48.800 18:01:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:48.800 18:01:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82044 00:14:48.800 18:01:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:48.800 18:01:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:48.800 killing process with pid 82044 00:14:48.800 18:01:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82044' 00:14:48.800 18:01:02 -- common/autotest_common.sh@955 -- # kill 82044 00:14:48.800 18:01:02 -- common/autotest_common.sh@960 -- # wait 82044 00:14:48.800 18:01:03 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:14:48.800 18:01:03 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:14:48.800 18:01:03 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:48.800 18:01:03 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:48.800 18:01:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:48.800 18:01:03 -- common/autotest_common.sh@10 -- # set +x 00:14:48.800 ************************************ 00:14:48.800 START TEST ftl_fio_basic 00:14:48.800 ************************************ 00:14:48.800 18:01:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:48.800 * Looking for test storage... 
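(The two jq filters a few lines up are how ftl.sh partitions the NVMe namespaces: a namespace with 64-byte separate metadata and at least 1310720 blocks becomes the NV-cache candidate, and any other sufficiently large namespace becomes a base-device candidate. A standalone rendering, assuming rpc.py is on PATH; the PCI addresses are the ones this run resolved:)

    # NV-cache candidates: separate-metadata namespaces >= 1310720 blocks
    rpc.py bdev_get_bdevs | jq -r \
      '.[] | select(.md_size==64 and .zoned == false and
                    .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
    # -> 0000:00:06.0

    # base candidates: everything else big enough, excluding the chosen cache
    rpc.py bdev_get_bdevs | jq -r \
      '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and
                    .zoned == false and
                    .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
    # -> 0000:00:07.0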
00:14:48.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:48.800 18:01:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:48.800 18:01:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:48.800 18:01:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:48.800 18:01:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:48.800 18:01:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:48.800 18:01:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:48.800 18:01:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:48.800 18:01:03 -- scripts/common.sh@335 -- # IFS=.-: 00:14:48.800 18:01:03 -- scripts/common.sh@335 -- # read -ra ver1 00:14:48.800 18:01:03 -- scripts/common.sh@336 -- # IFS=.-: 00:14:48.800 18:01:03 -- scripts/common.sh@336 -- # read -ra ver2 00:14:48.800 18:01:03 -- scripts/common.sh@337 -- # local 'op=<' 00:14:48.800 18:01:03 -- scripts/common.sh@339 -- # ver1_l=2 00:14:48.800 18:01:03 -- scripts/common.sh@340 -- # ver2_l=1 00:14:48.800 18:01:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:48.800 18:01:03 -- scripts/common.sh@343 -- # case "$op" in 00:14:48.800 18:01:03 -- scripts/common.sh@344 -- # : 1 00:14:48.800 18:01:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:48.800 18:01:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:48.800 18:01:03 -- scripts/common.sh@364 -- # decimal 1 00:14:48.800 18:01:03 -- scripts/common.sh@352 -- # local d=1 00:14:48.800 18:01:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:48.800 18:01:03 -- scripts/common.sh@354 -- # echo 1 00:14:48.800 18:01:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:48.800 18:01:03 -- scripts/common.sh@365 -- # decimal 2 00:14:48.800 18:01:03 -- scripts/common.sh@352 -- # local d=2 00:14:48.800 18:01:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:48.800 18:01:03 -- scripts/common.sh@354 -- # echo 2 00:14:48.800 18:01:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:48.800 18:01:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:48.800 18:01:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:48.800 18:01:03 -- scripts/common.sh@367 -- # return 0 00:14:48.800 18:01:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:48.800 18:01:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:48.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.800 --rc genhtml_branch_coverage=1 00:14:48.800 --rc genhtml_function_coverage=1 00:14:48.800 --rc genhtml_legend=1 00:14:48.800 --rc geninfo_all_blocks=1 00:14:48.800 --rc geninfo_unexecuted_blocks=1 00:14:48.800 00:14:48.800 ' 00:14:48.800 18:01:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:48.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.800 --rc genhtml_branch_coverage=1 00:14:48.800 --rc genhtml_function_coverage=1 00:14:48.800 --rc genhtml_legend=1 00:14:48.800 --rc geninfo_all_blocks=1 00:14:48.800 --rc geninfo_unexecuted_blocks=1 00:14:48.800 00:14:48.800 ' 00:14:48.800 18:01:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:48.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.800 --rc genhtml_branch_coverage=1 00:14:48.800 --rc genhtml_function_coverage=1 00:14:48.800 --rc genhtml_legend=1 00:14:48.800 --rc geninfo_all_blocks=1 00:14:48.800 --rc geninfo_unexecuted_blocks=1 00:14:48.800 00:14:48.800 ' 00:14:48.800 18:01:03 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:48.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.800 --rc genhtml_branch_coverage=1 00:14:48.800 --rc genhtml_function_coverage=1 00:14:48.800 --rc genhtml_legend=1 00:14:48.800 --rc geninfo_all_blocks=1 00:14:48.800 --rc geninfo_unexecuted_blocks=1 00:14:48.800 00:14:48.800 ' 00:14:48.800 18:01:03 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:48.800 18:01:03 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:14:48.800 18:01:03 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:48.800 18:01:03 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:48.800 18:01:03 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:48.800 18:01:03 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:48.800 18:01:03 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.800 18:01:03 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:48.800 18:01:03 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:48.800 18:01:03 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:48.800 18:01:03 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:48.800 18:01:03 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:48.800 18:01:03 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:48.801 18:01:03 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:48.801 18:01:03 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:48.801 18:01:03 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:48.801 18:01:03 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:48.801 18:01:03 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:48.801 18:01:03 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:48.801 18:01:03 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:48.801 18:01:03 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:48.801 18:01:03 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:48.801 18:01:03 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:48.801 18:01:03 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:48.801 18:01:03 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:48.801 18:01:03 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:48.801 18:01:03 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:48.801 18:01:03 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:48.801 18:01:03 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:48.801 18:01:03 -- ftl/fio.sh@11 -- # declare -A suite 00:14:48.801 18:01:03 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:48.801 18:01:03 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:14:48.801 18:01:03 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:14:48.801 18:01:03 -- ftl/fio.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.801 18:01:03 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:14:48.801 18:01:03 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:14:48.801 18:01:03 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:48.801 18:01:03 -- ftl/fio.sh@26 -- # uuid= 00:14:48.801 18:01:03 -- ftl/fio.sh@27 -- # timeout=240 00:14:48.801 18:01:03 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:14:48.801 18:01:03 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:14:48.801 18:01:03 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:14:48.801 18:01:03 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:14:48.801 18:01:03 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:48.801 18:01:03 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:48.801 18:01:03 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:14:48.801 18:01:03 -- ftl/fio.sh@45 -- # svcpid=82159 00:14:48.801 18:01:03 -- ftl/fio.sh@46 -- # waitforlisten 82159 00:14:48.801 18:01:03 -- common/autotest_common.sh@829 -- # '[' -z 82159 ']' 00:14:48.801 18:01:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.801 18:01:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:48.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.801 18:01:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.801 18:01:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:48.801 18:01:03 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:14:48.801 18:01:03 -- common/autotest_common.sh@10 -- # set +x 00:14:48.801 [2024-11-26 18:01:03.691715] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
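(The setup the basic fio suite performs below boils down to six RPCs; a condensed sketch, with rpc.py again standing in for the full script path, and <lvs-uuid>/<lvol-uuid> standing in for the UUIDs the trace prints as it goes:)

    # condensed sketch of the FTL bring-up traced below
    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0  # base NVMe
    rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>            # thin 101 GiB lvol
    rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0   # cache NVMe
    rpc.py bdev_split_create nvc0n1 -s 5171 1                            # 5171 MiB cache slice
    rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 60

As a cross-check on the numbers in the layout dump further down: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB l2p region FTL reports, which is presumably why the test passes --l2p_dram_limit 60 to cap the DRAM-resident portion of that table.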
00:14:48.801 [2024-11-26 18:01:03.691854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82159 ] 00:14:48.801 [2024-11-26 18:01:03.845579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:48.801 [2024-11-26 18:01:03.888744] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:48.801 [2024-11-26 18:01:03.889144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.801 [2024-11-26 18:01:03.889234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.801 [2024-11-26 18:01:03.889344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.801 18:01:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:48.801 18:01:04 -- common/autotest_common.sh@862 -- # return 0 00:14:48.801 18:01:04 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:14:48.801 18:01:04 -- ftl/common.sh@54 -- # local name=nvme0 00:14:48.801 18:01:04 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:14:48.801 18:01:04 -- ftl/common.sh@56 -- # local size=103424 00:14:48.801 18:01:04 -- ftl/common.sh@59 -- # local base_bdev 00:14:48.801 18:01:04 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:14:48.801 18:01:04 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:48.801 18:01:04 -- ftl/common.sh@62 -- # local base_size 00:14:48.801 18:01:04 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:48.801 18:01:04 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:14:48.801 18:01:04 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:48.801 18:01:04 -- common/autotest_common.sh@1369 -- # local bs 00:14:48.801 18:01:04 -- common/autotest_common.sh@1370 -- # local nb 00:14:48.801 18:01:04 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:14:48.801 18:01:04 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:48.801 { 00:14:48.801 "name": "nvme0n1", 00:14:48.801 "aliases": [ 00:14:48.801 "a3461516-68c1-4069-b653-65dcf40fa053" 00:14:48.801 ], 00:14:48.801 "product_name": "NVMe disk", 00:14:48.801 "block_size": 4096, 00:14:48.801 "num_blocks": 1310720, 00:14:48.801 "uuid": "a3461516-68c1-4069-b653-65dcf40fa053", 00:14:48.801 "assigned_rate_limits": { 00:14:48.801 "rw_ios_per_sec": 0, 00:14:48.801 "rw_mbytes_per_sec": 0, 00:14:48.801 "r_mbytes_per_sec": 0, 00:14:48.801 "w_mbytes_per_sec": 0 00:14:48.801 }, 00:14:48.801 "claimed": false, 00:14:48.801 "zoned": false, 00:14:48.801 "supported_io_types": { 00:14:48.801 "read": true, 00:14:48.801 "write": true, 00:14:48.801 "unmap": true, 00:14:48.801 "write_zeroes": true, 00:14:48.801 "flush": true, 00:14:48.801 "reset": true, 00:14:48.801 "compare": true, 00:14:48.801 "compare_and_write": false, 00:14:48.801 "abort": true, 00:14:48.801 "nvme_admin": true, 00:14:48.801 "nvme_io": true 00:14:48.801 }, 00:14:48.801 "driver_specific": { 00:14:48.801 "nvme": [ 00:14:48.801 { 00:14:48.801 "pci_address": "0000:00:07.0", 00:14:48.801 "trid": { 00:14:48.801 "trtype": "PCIe", 00:14:48.801 "traddr": "0000:00:07.0" 00:14:48.801 }, 00:14:48.801 "ctrlr_data": { 00:14:48.801 "cntlid": 0, 00:14:48.801 "vendor_id": "0x1b36", 00:14:48.801 "model_number": "QEMU NVMe Ctrl", 00:14:48.801 "serial_number": 
"12341", 00:14:48.801 "firmware_revision": "8.0.0", 00:14:48.801 "subnqn": "nqn.2019-08.org.qemu:12341", 00:14:48.801 "oacs": { 00:14:48.801 "security": 0, 00:14:48.801 "format": 1, 00:14:48.801 "firmware": 0, 00:14:48.801 "ns_manage": 1 00:14:48.801 }, 00:14:48.801 "multi_ctrlr": false, 00:14:48.801 "ana_reporting": false 00:14:48.801 }, 00:14:48.801 "vs": { 00:14:48.801 "nvme_version": "1.4" 00:14:48.801 }, 00:14:48.801 "ns_data": { 00:14:48.801 "id": 1, 00:14:48.801 "can_share": false 00:14:48.801 } 00:14:48.801 } 00:14:48.801 ], 00:14:48.801 "mp_policy": "active_passive" 00:14:48.801 } 00:14:48.801 } 00:14:48.801 ]' 00:14:48.801 18:01:04 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:48.801 18:01:04 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:48.801 18:01:05 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:48.801 18:01:05 -- common/autotest_common.sh@1373 -- # nb=1310720 00:14:48.801 18:01:05 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:14:48.801 18:01:05 -- common/autotest_common.sh@1377 -- # echo 5120 00:14:48.801 18:01:05 -- ftl/common.sh@63 -- # base_size=5120 00:14:48.801 18:01:05 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:14:48.801 18:01:05 -- ftl/common.sh@67 -- # clear_lvols 00:14:48.801 18:01:05 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:48.801 18:01:05 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:48.801 18:01:05 -- ftl/common.sh@28 -- # stores= 00:14:48.801 18:01:05 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:48.801 18:01:05 -- ftl/common.sh@68 -- # lvs=25a5a626-8e7e-4fa9-930b-224693f0b233 00:14:48.801 18:01:05 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 25a5a626-8e7e-4fa9-930b-224693f0b233 00:14:48.801 18:01:05 -- ftl/fio.sh@48 -- # split_bdev=2da4baf5-89ca-447b-997f-581cc389c012 00:14:48.801 18:01:05 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 2da4baf5-89ca-447b-997f-581cc389c012 00:14:48.801 18:01:05 -- ftl/common.sh@35 -- # local name=nvc0 00:14:48.801 18:01:05 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:14:48.801 18:01:05 -- ftl/common.sh@37 -- # local base_bdev=2da4baf5-89ca-447b-997f-581cc389c012 00:14:48.801 18:01:05 -- ftl/common.sh@38 -- # local cache_size= 00:14:48.801 18:01:05 -- ftl/common.sh@41 -- # get_bdev_size 2da4baf5-89ca-447b-997f-581cc389c012 00:14:48.801 18:01:05 -- common/autotest_common.sh@1367 -- # local bdev_name=2da4baf5-89ca-447b-997f-581cc389c012 00:14:48.801 18:01:05 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:48.801 18:01:05 -- common/autotest_common.sh@1369 -- # local bs 00:14:48.801 18:01:05 -- common/autotest_common.sh@1370 -- # local nb 00:14:48.801 18:01:05 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2da4baf5-89ca-447b-997f-581cc389c012 00:14:49.060 18:01:05 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:49.060 { 00:14:49.060 "name": "2da4baf5-89ca-447b-997f-581cc389c012", 00:14:49.060 "aliases": [ 00:14:49.060 "lvs/nvme0n1p0" 00:14:49.060 ], 00:14:49.060 "product_name": "Logical Volume", 00:14:49.060 "block_size": 4096, 00:14:49.060 "num_blocks": 26476544, 00:14:49.060 "uuid": "2da4baf5-89ca-447b-997f-581cc389c012", 00:14:49.060 "assigned_rate_limits": { 00:14:49.060 "rw_ios_per_sec": 0, 00:14:49.060 "rw_mbytes_per_sec": 0, 00:14:49.060 "r_mbytes_per_sec": 0, 00:14:49.060 
"w_mbytes_per_sec": 0 00:14:49.060 }, 00:14:49.060 "claimed": false, 00:14:49.060 "zoned": false, 00:14:49.060 "supported_io_types": { 00:14:49.060 "read": true, 00:14:49.060 "write": true, 00:14:49.060 "unmap": true, 00:14:49.060 "write_zeroes": true, 00:14:49.060 "flush": false, 00:14:49.060 "reset": true, 00:14:49.060 "compare": false, 00:14:49.060 "compare_and_write": false, 00:14:49.060 "abort": false, 00:14:49.060 "nvme_admin": false, 00:14:49.060 "nvme_io": false 00:14:49.060 }, 00:14:49.060 "driver_specific": { 00:14:49.060 "lvol": { 00:14:49.060 "lvol_store_uuid": "25a5a626-8e7e-4fa9-930b-224693f0b233", 00:14:49.060 "base_bdev": "nvme0n1", 00:14:49.060 "thin_provision": true, 00:14:49.060 "snapshot": false, 00:14:49.060 "clone": false, 00:14:49.060 "esnap_clone": false 00:14:49.060 } 00:14:49.060 } 00:14:49.060 } 00:14:49.060 ]' 00:14:49.060 18:01:05 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:49.060 18:01:05 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:49.060 18:01:05 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:49.060 18:01:05 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:49.060 18:01:05 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:49.060 18:01:05 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:49.060 18:01:05 -- ftl/common.sh@41 -- # local base_size=5171 00:14:49.060 18:01:05 -- ftl/common.sh@44 -- # local nvc_bdev 00:14:49.060 18:01:05 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:14:49.320 18:01:06 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:14:49.320 18:01:06 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:14:49.320 18:01:06 -- ftl/common.sh@48 -- # get_bdev_size 2da4baf5-89ca-447b-997f-581cc389c012 00:14:49.320 18:01:06 -- common/autotest_common.sh@1367 -- # local bdev_name=2da4baf5-89ca-447b-997f-581cc389c012 00:14:49.320 18:01:06 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:49.320 18:01:06 -- common/autotest_common.sh@1369 -- # local bs 00:14:49.320 18:01:06 -- common/autotest_common.sh@1370 -- # local nb 00:14:49.320 18:01:06 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2da4baf5-89ca-447b-997f-581cc389c012 00:14:49.578 18:01:06 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:49.578 { 00:14:49.578 "name": "2da4baf5-89ca-447b-997f-581cc389c012", 00:14:49.578 "aliases": [ 00:14:49.578 "lvs/nvme0n1p0" 00:14:49.578 ], 00:14:49.578 "product_name": "Logical Volume", 00:14:49.578 "block_size": 4096, 00:14:49.578 "num_blocks": 26476544, 00:14:49.578 "uuid": "2da4baf5-89ca-447b-997f-581cc389c012", 00:14:49.578 "assigned_rate_limits": { 00:14:49.578 "rw_ios_per_sec": 0, 00:14:49.578 "rw_mbytes_per_sec": 0, 00:14:49.578 "r_mbytes_per_sec": 0, 00:14:49.578 "w_mbytes_per_sec": 0 00:14:49.578 }, 00:14:49.578 "claimed": false, 00:14:49.578 "zoned": false, 00:14:49.578 "supported_io_types": { 00:14:49.578 "read": true, 00:14:49.578 "write": true, 00:14:49.578 "unmap": true, 00:14:49.578 "write_zeroes": true, 00:14:49.578 "flush": false, 00:14:49.578 "reset": true, 00:14:49.578 "compare": false, 00:14:49.578 "compare_and_write": false, 00:14:49.578 "abort": false, 00:14:49.578 "nvme_admin": false, 00:14:49.578 "nvme_io": false 00:14:49.578 }, 00:14:49.578 "driver_specific": { 00:14:49.578 "lvol": { 00:14:49.578 "lvol_store_uuid": "25a5a626-8e7e-4fa9-930b-224693f0b233", 00:14:49.578 "base_bdev": "nvme0n1", 00:14:49.578 "thin_provision": true, 
00:14:49.578 "snapshot": false, 00:14:49.578 "clone": false, 00:14:49.578 "esnap_clone": false 00:14:49.578 } 00:14:49.578 } 00:14:49.578 } 00:14:49.578 ]' 00:14:49.578 18:01:06 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:49.578 18:01:06 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:49.578 18:01:06 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:49.578 18:01:06 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:49.578 18:01:06 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:49.578 18:01:06 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:49.578 18:01:06 -- ftl/common.sh@48 -- # cache_size=5171 00:14:49.579 18:01:06 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:14:49.838 18:01:06 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:14:49.838 18:01:06 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:14:49.838 18:01:06 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:14:49.838 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:14:49.838 18:01:06 -- ftl/fio.sh@56 -- # get_bdev_size 2da4baf5-89ca-447b-997f-581cc389c012 00:14:49.838 18:01:06 -- common/autotest_common.sh@1367 -- # local bdev_name=2da4baf5-89ca-447b-997f-581cc389c012 00:14:49.838 18:01:06 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:49.838 18:01:06 -- common/autotest_common.sh@1369 -- # local bs 00:14:49.838 18:01:06 -- common/autotest_common.sh@1370 -- # local nb 00:14:49.838 18:01:06 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2da4baf5-89ca-447b-997f-581cc389c012 00:14:50.097 18:01:06 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:50.097 { 00:14:50.097 "name": "2da4baf5-89ca-447b-997f-581cc389c012", 00:14:50.097 "aliases": [ 00:14:50.097 "lvs/nvme0n1p0" 00:14:50.097 ], 00:14:50.097 "product_name": "Logical Volume", 00:14:50.097 "block_size": 4096, 00:14:50.097 "num_blocks": 26476544, 00:14:50.097 "uuid": "2da4baf5-89ca-447b-997f-581cc389c012", 00:14:50.097 "assigned_rate_limits": { 00:14:50.097 "rw_ios_per_sec": 0, 00:14:50.097 "rw_mbytes_per_sec": 0, 00:14:50.097 "r_mbytes_per_sec": 0, 00:14:50.097 "w_mbytes_per_sec": 0 00:14:50.097 }, 00:14:50.097 "claimed": false, 00:14:50.097 "zoned": false, 00:14:50.097 "supported_io_types": { 00:14:50.097 "read": true, 00:14:50.097 "write": true, 00:14:50.097 "unmap": true, 00:14:50.097 "write_zeroes": true, 00:14:50.097 "flush": false, 00:14:50.097 "reset": true, 00:14:50.097 "compare": false, 00:14:50.097 "compare_and_write": false, 00:14:50.097 "abort": false, 00:14:50.097 "nvme_admin": false, 00:14:50.097 "nvme_io": false 00:14:50.097 }, 00:14:50.097 "driver_specific": { 00:14:50.097 "lvol": { 00:14:50.097 "lvol_store_uuid": "25a5a626-8e7e-4fa9-930b-224693f0b233", 00:14:50.097 "base_bdev": "nvme0n1", 00:14:50.097 "thin_provision": true, 00:14:50.097 "snapshot": false, 00:14:50.097 "clone": false, 00:14:50.097 "esnap_clone": false 00:14:50.097 } 00:14:50.097 } 00:14:50.097 } 00:14:50.097 ]' 00:14:50.097 18:01:06 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:50.097 18:01:06 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:50.097 18:01:06 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:50.097 18:01:06 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:50.097 18:01:06 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:50.097 18:01:06 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:50.097 
18:01:06 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:14:50.097 18:01:06 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:14:50.097 18:01:06 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2da4baf5-89ca-447b-997f-581cc389c012 -c nvc0n1p0 --l2p_dram_limit 60 00:14:50.358 [2024-11-26 18:01:07.091121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.358 [2024-11-26 18:01:07.091167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:50.358 [2024-11-26 18:01:07.091186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:14:50.358 [2024-11-26 18:01:07.091201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.358 [2024-11-26 18:01:07.091292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.358 [2024-11-26 18:01:07.091306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:50.358 [2024-11-26 18:01:07.091323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:14:50.358 [2024-11-26 18:01:07.091334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.358 [2024-11-26 18:01:07.091381] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:50.358 [2024-11-26 18:01:07.091682] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:50.358 [2024-11-26 18:01:07.091718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.358 [2024-11-26 18:01:07.091729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:50.358 [2024-11-26 18:01:07.091742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:14:50.358 [2024-11-26 18:01:07.091753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.358 [2024-11-26 18:01:07.091914] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3362f652-1cf1-41f7-a77e-c281b408dad2 00:14:50.358 [2024-11-26 18:01:07.093400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.358 [2024-11-26 18:01:07.093466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:14:50.358 [2024-11-26 18:01:07.093493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:14:50.358 [2024-11-26 18:01:07.093509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.358 [2024-11-26 18:01:07.101039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.358 [2024-11-26 18:01:07.101087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:50.358 [2024-11-26 18:01:07.101099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.456 ms 00:14:50.358 [2024-11-26 18:01:07.101117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.358 [2024-11-26 18:01:07.101226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.358 [2024-11-26 18:01:07.101245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:50.358 [2024-11-26 18:01:07.101270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:14:50.358 [2024-11-26 18:01:07.101285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.358 [2024-11-26 18:01:07.101369] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:14:50.358 [2024-11-26 18:01:07.101392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:14:50.358 [2024-11-26 18:01:07.101403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:14:50.358 [2024-11-26 18:01:07.101418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.358 [2024-11-26 18:01:07.101493] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:14:50.358 [2024-11-26 18:01:07.103310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.358 [2024-11-26 18:01:07.103334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:50.358 [2024-11-26 18:01:07.103352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.858 ms 00:14:50.358 [2024-11-26 18:01:07.103362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.358 [2024-11-26 18:01:07.103418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.358 [2024-11-26 18:01:07.103435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:14:50.358 [2024-11-26 18:01:07.103464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:14:50.358 [2024-11-26 18:01:07.103475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.358 [2024-11-26 18:01:07.103507] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:14:50.358 [2024-11-26 18:01:07.103636] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:14:50.358 [2024-11-26 18:01:07.103659] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:14:50.358 [2024-11-26 18:01:07.103672] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:14:50.358 [2024-11-26 18:01:07.103693] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:14:50.358 [2024-11-26 18:01:07.103706] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:14:50.358 [2024-11-26 18:01:07.103732] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:14:50.358 [2024-11-26 18:01:07.103753] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:14:50.358 [2024-11-26 18:01:07.103766] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:14:50.358 [2024-11-26 18:01:07.103776] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:14:50.358 [2024-11-26 18:01:07.103806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.358 [2024-11-26 18:01:07.103818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:14:50.358 [2024-11-26 18:01:07.103833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:14:50.358 [2024-11-26 18:01:07.103848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.358 [2024-11-26 18:01:07.103943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.358 [2024-11-26 18:01:07.103954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:14:50.358 [2024-11-26 18:01:07.103971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.041 ms 00:14:50.358 [2024-11-26 18:01:07.103981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.358 [2024-11-26 18:01:07.104069] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:14:50.358 [2024-11-26 18:01:07.104081] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:14:50.358 [2024-11-26 18:01:07.104094] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:50.358 [2024-11-26 18:01:07.104104] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:50.358 [2024-11-26 18:01:07.104120] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:14:50.358 [2024-11-26 18:01:07.104129] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:14:50.358 [2024-11-26 18:01:07.104141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:14:50.358 [2024-11-26 18:01:07.104151] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:14:50.358 [2024-11-26 18:01:07.104162] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:14:50.358 [2024-11-26 18:01:07.104172] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:50.358 [2024-11-26 18:01:07.104186] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:14:50.358 [2024-11-26 18:01:07.104196] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:14:50.358 [2024-11-26 18:01:07.104214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:50.358 [2024-11-26 18:01:07.104224] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:14:50.358 [2024-11-26 18:01:07.104238] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:14:50.358 [2024-11-26 18:01:07.104247] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:50.358 [2024-11-26 18:01:07.104261] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:14:50.358 [2024-11-26 18:01:07.104270] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:14:50.358 [2024-11-26 18:01:07.104284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:50.358 [2024-11-26 18:01:07.104294] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:14:50.358 [2024-11-26 18:01:07.104308] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:14:50.358 [2024-11-26 18:01:07.104317] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:14:50.358 [2024-11-26 18:01:07.104332] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:14:50.358 [2024-11-26 18:01:07.104341] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:14:50.358 [2024-11-26 18:01:07.104355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:50.358 [2024-11-26 18:01:07.104364] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:14:50.358 [2024-11-26 18:01:07.104380] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:14:50.358 [2024-11-26 18:01:07.104389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:50.358 [2024-11-26 18:01:07.104408] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:14:50.358 [2024-11-26 18:01:07.104417] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:14:50.358 [2024-11-26 18:01:07.104432] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:50.358 [2024-11-26 18:01:07.104441] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:14:50.358 [2024-11-26 18:01:07.104466] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:14:50.358 [2024-11-26 18:01:07.104477] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:50.358 [2024-11-26 18:01:07.104491] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:14:50.358 [2024-11-26 18:01:07.104501] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:14:50.358 [2024-11-26 18:01:07.104514] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:50.358 [2024-11-26 18:01:07.104524] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:14:50.359 [2024-11-26 18:01:07.104539] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:14:50.359 [2024-11-26 18:01:07.104549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:50.359 [2024-11-26 18:01:07.104563] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:14:50.359 [2024-11-26 18:01:07.104573] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:14:50.359 [2024-11-26 18:01:07.104589] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:50.359 [2024-11-26 18:01:07.104599] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:50.359 [2024-11-26 18:01:07.104619] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:14:50.359 [2024-11-26 18:01:07.104629] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:14:50.359 [2024-11-26 18:01:07.104643] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:14:50.359 [2024-11-26 18:01:07.104653] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:14:50.359 [2024-11-26 18:01:07.104667] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:14:50.359 [2024-11-26 18:01:07.104678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:14:50.359 [2024-11-26 18:01:07.104694] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:14:50.359 [2024-11-26 18:01:07.104706] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:50.359 [2024-11-26 18:01:07.104723] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:14:50.359 [2024-11-26 18:01:07.104734] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:14:50.359 [2024-11-26 18:01:07.104746] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:14:50.359 [2024-11-26 18:01:07.104757] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:14:50.359 [2024-11-26 18:01:07.104770] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:14:50.359 [2024-11-26 18:01:07.104780] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:14:50.359 
[2024-11-26 18:01:07.104792] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:14:50.359 [2024-11-26 18:01:07.104802] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:14:50.359 [2024-11-26 18:01:07.104817] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:14:50.359 [2024-11-26 18:01:07.104827] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:14:50.359 [2024-11-26 18:01:07.104839] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:14:50.359 [2024-11-26 18:01:07.104850] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:14:50.359 [2024-11-26 18:01:07.104862] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:14:50.359 [2024-11-26 18:01:07.104872] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:14:50.359 [2024-11-26 18:01:07.104889] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:50.359 [2024-11-26 18:01:07.104900] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:14:50.359 [2024-11-26 18:01:07.104915] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:14:50.359 [2024-11-26 18:01:07.104925] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:14:50.359 [2024-11-26 18:01:07.104941] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:14:50.359 [2024-11-26 18:01:07.104952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.359 [2024-11-26 18:01:07.104968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:14:50.359 [2024-11-26 18:01:07.104978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:14:50.359 [2024-11-26 18:01:07.105000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.359 [2024-11-26 18:01:07.113619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.359 [2024-11-26 18:01:07.113660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:50.359 [2024-11-26 18:01:07.113673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.556 ms 00:14:50.359 [2024-11-26 18:01:07.113686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.359 [2024-11-26 18:01:07.113775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.359 [2024-11-26 18:01:07.113802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:50.359 [2024-11-26 18:01:07.113814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:14:50.359 [2024-11-26 18:01:07.113838] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.359 [2024-11-26 18:01:07.126076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.359 [2024-11-26 18:01:07.126117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:50.359 [2024-11-26 18:01:07.126138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.172 ms 00:14:50.359 [2024-11-26 18:01:07.126154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.359 [2024-11-26 18:01:07.126194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.359 [2024-11-26 18:01:07.126210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:50.359 [2024-11-26 18:01:07.126222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:50.359 [2024-11-26 18:01:07.126242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.359 [2024-11-26 18:01:07.126733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.359 [2024-11-26 18:01:07.126765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:50.359 [2024-11-26 18:01:07.126777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:14:50.359 [2024-11-26 18:01:07.126792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.359 [2024-11-26 18:01:07.126921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.359 [2024-11-26 18:01:07.126939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:50.359 [2024-11-26 18:01:07.126950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:14:50.359 [2024-11-26 18:01:07.126962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.359 [2024-11-26 18:01:07.148795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.359 [2024-11-26 18:01:07.148842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:50.359 [2024-11-26 18:01:07.148860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.834 ms 00:14:50.359 [2024-11-26 18:01:07.148880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.359 [2024-11-26 18:01:07.159292] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:14:50.359 [2024-11-26 18:01:07.176363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.359 [2024-11-26 18:01:07.176403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:50.359 [2024-11-26 18:01:07.176423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.374 ms 00:14:50.359 [2024-11-26 18:01:07.176434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.359 [2024-11-26 18:01:07.258151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.359 [2024-11-26 18:01:07.258210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:14:50.359 [2024-11-26 18:01:07.258232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.752 ms 00:14:50.359 [2024-11-26 18:01:07.258244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.359 [2024-11-26 18:01:07.258295] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
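
The records above trace FTL metadata initialization for ftl0; the scrub announced here is a one-time cost of first startup, since the NV cache data region has to be scrubbed before the device can be used. The records just below show the 4 GiB scrub taking 4533.296 ms, i.e. roughly 4 GiB / 4.533 s ≈ 0.9 GiB/s. For reference, the create-and-wait sequence this test exercises can be sketched against a running SPDK target as follows — a minimal sketch, not part of the log: the bdev names base0 and nvc0 are hypothetical placeholders, and bdev_ftl_create is assumed to be the FTL-create RPC in this SPDK tree (the bdev_wait_for_examine and bdev_get_bdevs calls mirror the waitforbdev trace further down in this log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Create an FTL bdev over a base bdev and an NV-cache bdev; on a fresh
  # cache this triggers the "Scrub NV cache" step seen in the records below.
  $rpc bdev_ftl_create -b ftl0 -d base0 -c nvc0
  # Wait for bdev examination to settle, then poll up to 2000 ms for ftl0.
  $rpc bdev_wait_for_examine
  $rpc bdev_get_bdevs -b ftl0 -t 2000
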
00:14:50.359 [2024-11-26 18:01:07.258313] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:14:55.639 [2024-11-26 18:01:11.784260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.639 [2024-11-26 18:01:11.784336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:14:55.639 [2024-11-26 18:01:11.784359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4533.296 ms 00:14:55.639 [2024-11-26 18:01:11.784372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.639 [2024-11-26 18:01:11.784592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.639 [2024-11-26 18:01:11.784618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:55.639 [2024-11-26 18:01:11.784633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:14:55.639 [2024-11-26 18:01:11.784643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.639 [2024-11-26 18:01:11.788577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.639 [2024-11-26 18:01:11.788608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:14:55.639 [2024-11-26 18:01:11.788628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.882 ms 00:14:55.639 [2024-11-26 18:01:11.788639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.639 [2024-11-26 18:01:11.791482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.639 [2024-11-26 18:01:11.791515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:14:55.639 [2024-11-26 18:01:11.791531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.801 ms 00:14:55.639 [2024-11-26 18:01:11.791541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.639 [2024-11-26 18:01:11.791713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.639 [2024-11-26 18:01:11.791728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:14:55.639 [2024-11-26 18:01:11.791742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:14:55.639 [2024-11-26 18:01:11.791752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.639 [2024-11-26 18:01:11.824019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.639 [2024-11-26 18:01:11.824059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:14:55.639 [2024-11-26 18:01:11.824078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.278 ms 00:14:55.639 [2024-11-26 18:01:11.824089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.639 [2024-11-26 18:01:11.828813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.639 [2024-11-26 18:01:11.828848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:14:55.639 [2024-11-26 18:01:11.828872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.678 ms 00:14:55.639 [2024-11-26 18:01:11.828883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.639 [2024-11-26 18:01:11.833638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.639 [2024-11-26 18:01:11.833673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:14:55.639 
[2024-11-26 18:01:11.833690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.712 ms
00:14:55.639 [2024-11-26 18:01:11.833701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:55.639 [2024-11-26 18:01:11.837584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:55.639 [2024-11-26 18:01:11.837617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:14:55.639 [2024-11-26 18:01:11.837632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.850 ms
00:14:55.639 [2024-11-26 18:01:11.837642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:55.639 [2024-11-26 18:01:11.837696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:55.639 [2024-11-26 18:01:11.837720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:14:55.639 [2024-11-26 18:01:11.837746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:14:55.639 [2024-11-26 18:01:11.837767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:55.639 [2024-11-26 18:01:11.837859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:55.639 [2024-11-26 18:01:11.837873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:14:55.639 [2024-11-26 18:01:11.837891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms
00:14:55.639 [2024-11-26 18:01:11.837901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:55.639 [2024-11-26 18:01:11.839143] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4755.263 ms, result 0
00:14:55.639 {
00:14:55.639 "name": "ftl0",
00:14:55.639 "uuid": "3362f652-1cf1-41f7-a77e-c281b408dad2"
00:14:55.640 }
00:14:55.640 18:01:11 -- ftl/fio.sh@65 -- # waitforbdev ftl0
00:14:55.640 18:01:11 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0
00:14:55.640 18:01:11 -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:55.640 18:01:11 -- common/autotest_common.sh@899 -- # local i
00:14:55.640 18:01:11 -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:55.640 18:01:11 -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:55.640 18:01:11 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:14:55.640 18:01:12 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000
[
00:14:55.640 {
00:14:55.640 "name": "ftl0",
00:14:55.640 "aliases": [
00:14:55.640 "3362f652-1cf1-41f7-a77e-c281b408dad2"
00:14:55.640 ],
00:14:55.640 "product_name": "FTL disk",
00:14:55.640 "block_size": 4096,
00:14:55.640 "num_blocks": 20971520,
00:14:55.640 "uuid": "3362f652-1cf1-41f7-a77e-c281b408dad2",
00:14:55.640 "assigned_rate_limits": {
00:14:55.640 "rw_ios_per_sec": 0,
00:14:55.640 "rw_mbytes_per_sec": 0,
00:14:55.640 "r_mbytes_per_sec": 0,
00:14:55.640 "w_mbytes_per_sec": 0
00:14:55.640 },
00:14:55.640 "claimed": false,
00:14:55.640 "zoned": false,
00:14:55.640 "supported_io_types": {
00:14:55.640 "read": true,
00:14:55.640 "write": true,
00:14:55.640 "unmap": true,
00:14:55.640 "write_zeroes": true,
00:14:55.640 "flush": true,
00:14:55.640 "reset": false,
00:14:55.640 "compare": false,
00:14:55.640 "compare_and_write": false,
00:14:55.640 "abort": false,
00:14:55.640 "nvme_admin": false,
00:14:55.640 "nvme_io": false
00:14:55.640 },
00:14:55.640 "driver_specific": {
00:14:55.640 "ftl": {
00:14:55.640 "base_bdev": "2da4baf5-89ca-447b-997f-581cc389c012",
00:14:55.640 "cache": "nvc0n1p0"
00:14:55.640 }
00:14:55.640 }
00:14:55.640 }
00:14:55.640 ]
00:14:55.640 18:01:12 -- common/autotest_common.sh@905 -- # return 0
00:14:55.640 18:01:12 -- ftl/fio.sh@68 -- # echo '{"subsystems": ['
00:14:55.640 18:01:12 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:14:55.640 18:01:12 -- ftl/fio.sh@70 -- # echo ']}'
00:14:55.640 18:01:12 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:14:55.901 [2024-11-26 18:01:12.634450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:55.901 [2024-11-26 18:01:12.634526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:14:55.901 [2024-11-26 18:01:12.634549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:14:55.901 [2024-11-26 18:01:12.634581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:55.901 [2024-11-26 18:01:12.634619] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:14:55.901 [2024-11-26 18:01:12.635316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:55.901 [2024-11-26 18:01:12.635338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:14:55.901 [2024-11-26 18:01:12.635355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms
00:14:55.901 [2024-11-26 18:01:12.635366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:55.901 [2024-11-26 18:01:12.635843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:55.901 [2024-11-26 18:01:12.635868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:14:55.901 [2024-11-26 18:01:12.635885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms
00:14:55.901 [2024-11-26 18:01:12.635900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:55.901 [2024-11-26 18:01:12.638416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:55.901 [2024-11-26 18:01:12.638438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:14:55.901 [2024-11-26 18:01:12.638461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.482 ms
00:14:55.901 [2024-11-26 18:01:12.638472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:55.901 [2024-11-26 18:01:12.643465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:55.901 [2024-11-26 18:01:12.643496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:14:55.901 [2024-11-26 18:01:12.643532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.956 ms
00:14:55.901 [2024-11-26 18:01:12.643548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:55.901 [2024-11-26 18:01:12.645164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:55.901 [2024-11-26 18:01:12.645197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:14:55.901 [2024-11-26 18:01:12.645215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.500 ms
00:14:55.901 [2024-11-26 18:01:12.645225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:55.901 [2024-11-26 18:01:12.652427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:14:55.901 [2024-11-26 18:01:12.652473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:14:55.901 [2024-11-26 18:01:12.652493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.171 ms 00:14:55.901 [2024-11-26 18:01:12.652504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.901 [2024-11-26 18:01:12.652676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.901 [2024-11-26 18:01:12.652691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:14:55.901 [2024-11-26 18:01:12.652713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:14:55.901 [2024-11-26 18:01:12.652723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.901 [2024-11-26 18:01:12.654591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.901 [2024-11-26 18:01:12.654620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:14:55.901 [2024-11-26 18:01:12.654637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.834 ms 00:14:55.901 [2024-11-26 18:01:12.654646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.901 [2024-11-26 18:01:12.656230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.901 [2024-11-26 18:01:12.656260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:14:55.901 [2024-11-26 18:01:12.656277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.541 ms 00:14:55.901 [2024-11-26 18:01:12.656287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.901 [2024-11-26 18:01:12.657479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.901 [2024-11-26 18:01:12.657509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:14:55.901 [2024-11-26 18:01:12.657526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.148 ms 00:14:55.901 [2024-11-26 18:01:12.657536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.901 [2024-11-26 18:01:12.658748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.901 [2024-11-26 18:01:12.658780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:14:55.901 [2024-11-26 18:01:12.658796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.119 ms 00:14:55.901 [2024-11-26 18:01:12.658806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.901 [2024-11-26 18:01:12.658847] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:14:55.901 [2024-11-26 18:01:12.658864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:14:55.901 [2024-11-26 18:01:12.658881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:14:55.901 [2024-11-26 18:01:12.658892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:14:55.901 [2024-11-26 18:01:12.658905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:14:55.901 [2024-11-26 18:01:12.658916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:14:55.901 [2024-11-26 18:01:12.658928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:14:55.901 [2024-11-26 18:01:12.658939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:14:55.901 [2024-11-26 18:01:12.658952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:14:55.901 [2024-11-26 18:01:12.658963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:14:55.901 [2024-11-26 18:01:12.658976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:14:55.901 [2024-11-26 18:01:12.658987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:14:55.901 [2024-11-26 18:01:12.659002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:14:55.901 [2024-11-26 18:01:12.659013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:14:55.901 [2024-11-26 18:01:12.659026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:14:55.901 [2024-11-26 18:01:12.659036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:14:55.901 [2024-11-26 18:01:12.659049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:14:55.901 [2024-11-26 18:01:12.659060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:14:55.901 [2024-11-26 18:01:12.659072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:14:55.901 [2024-11-26 18:01:12.659082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:14:55.901 [2024-11-26 18:01:12.659095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:14:55.901 [2024-11-26 18:01:12.659106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:14:55.901 [2024-11-26 18:01:12.659118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:14:55.901 [2024-11-26 18:01:12.659130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659227] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 
18:01:12.659555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 
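
Every entry in this dump, which continues through Band 100 just below, reads "0 / 261120 wr_cnt: 0 state: free": the device is unloaded immediately after creation, so no band ever received user data. Taking the 4096-byte block size reported by bdev_get_bdevs above, the numbers are self-consistent (a back-of-the-envelope check, not part of the log):

  261120 blocks/band x 4096 B/block = 1,069,547,520 B = 1020 MiB per band
  100 bands x 1020 MiB = 102000 MiB, in line with the 102400 MiB data_btm region in
  the layout dump earlier (the difference presumably held by per-band metadata and alignment)

The statistics further down report "WAF: inf" for the same reason: write amplification is total writes / user writes = 960 / 0, which is undefined on a device whose only writes so far were FTL-internal metadata writes.
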
00:14:55.902 [2024-11-26 18:01:12.659892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.659998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.660013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.660024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.660044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.660055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.660071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.660081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.660096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.660107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.660122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.660133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.660149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:14:55.902 [2024-11-26 18:01:12.660166] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:14:55.902 [2024-11-26 18:01:12.660181] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3362f652-1cf1-41f7-a77e-c281b408dad2 00:14:55.902 [2024-11-26 18:01:12.660193] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:14:55.902 [2024-11-26 18:01:12.660207] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:14:55.902 [2024-11-26 18:01:12.660218] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:14:55.902 [2024-11-26 18:01:12.660233] ftl_debug.c: 216:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] WAF: inf 00:14:55.902 [2024-11-26 18:01:12.660243] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:14:55.902 [2024-11-26 18:01:12.660270] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:14:55.902 [2024-11-26 18:01:12.660280] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:14:55.902 [2024-11-26 18:01:12.660295] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:14:55.903 [2024-11-26 18:01:12.660304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:14:55.903 [2024-11-26 18:01:12.660320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.903 [2024-11-26 18:01:12.660331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:14:55.903 [2024-11-26 18:01:12.660347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.476 ms 00:14:55.903 [2024-11-26 18:01:12.660357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.903 [2024-11-26 18:01:12.662216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.903 [2024-11-26 18:01:12.662239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:14:55.903 [2024-11-26 18:01:12.662256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.825 ms 00:14:55.903 [2024-11-26 18:01:12.662271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.903 [2024-11-26 18:01:12.662371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:55.903 [2024-11-26 18:01:12.662386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:14:55.903 [2024-11-26 18:01:12.662401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:14:55.903 [2024-11-26 18:01:12.662412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.903 [2024-11-26 18:01:12.669527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:55.903 [2024-11-26 18:01:12.669560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:55.903 [2024-11-26 18:01:12.669583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:55.903 [2024-11-26 18:01:12.669594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.903 [2024-11-26 18:01:12.669663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:55.903 [2024-11-26 18:01:12.669674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:55.903 [2024-11-26 18:01:12.669689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:55.903 [2024-11-26 18:01:12.669699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.903 [2024-11-26 18:01:12.669802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:55.903 [2024-11-26 18:01:12.669816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:55.903 [2024-11-26 18:01:12.669832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:55.903 [2024-11-26 18:01:12.669848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.903 [2024-11-26 18:01:12.669879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:55.903 [2024-11-26 18:01:12.669890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:55.903 [2024-11-26 
18:01:12.669906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:55.903 [2024-11-26 18:01:12.669916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.903 [2024-11-26 18:01:12.684014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:55.903 [2024-11-26 18:01:12.684061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:55.903 [2024-11-26 18:01:12.684081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:55.903 [2024-11-26 18:01:12.684097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.903 [2024-11-26 18:01:12.688726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:55.903 [2024-11-26 18:01:12.688759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:55.903 [2024-11-26 18:01:12.688777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:55.903 [2024-11-26 18:01:12.688801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.903 [2024-11-26 18:01:12.688881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:55.903 [2024-11-26 18:01:12.688913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:55.903 [2024-11-26 18:01:12.688926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:55.903 [2024-11-26 18:01:12.688937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.903 [2024-11-26 18:01:12.689010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:55.903 [2024-11-26 18:01:12.689021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:55.903 [2024-11-26 18:01:12.689046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:55.903 [2024-11-26 18:01:12.689056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.903 [2024-11-26 18:01:12.689156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:55.903 [2024-11-26 18:01:12.689169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:55.903 [2024-11-26 18:01:12.689182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:55.903 [2024-11-26 18:01:12.689192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.903 [2024-11-26 18:01:12.689249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:55.903 [2024-11-26 18:01:12.689276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:14:55.903 [2024-11-26 18:01:12.689289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:55.903 [2024-11-26 18:01:12.689299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.903 [2024-11-26 18:01:12.689349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:55.903 [2024-11-26 18:01:12.689360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:55.903 [2024-11-26 18:01:12.689373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:55.903 [2024-11-26 18:01:12.689382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:55.903 [2024-11-26 18:01:12.689444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:55.903 [2024-11-26 18:01:12.689467] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Open base bdev
00:14:55.903 [2024-11-26 18:01:12.689483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:14:55.903 [2024-11-26 18:01:12.689493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:55.903 [2024-11-26 18:01:12.689670] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.268 ms, result 0
00:14:55.903 true
00:14:55.903 18:01:12 -- ftl/fio.sh@75 -- # killprocess 82159
00:14:55.903 18:01:12 -- common/autotest_common.sh@936 -- # '[' -z 82159 ']'
00:14:55.903 18:01:12 -- common/autotest_common.sh@940 -- # kill -0 82159
00:14:55.903 18:01:12 -- common/autotest_common.sh@941 -- # uname
00:14:55.903 18:01:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:55.903 18:01:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82159
00:14:55.903 18:01:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:55.903 18:01:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:55.903 18:01:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82159'
killing process with pid 82159
18:01:12 -- common/autotest_common.sh@955 -- # kill 82159
18:01:12 -- common/autotest_common.sh@960 -- # wait 82159
00:14:59.189 18:01:15 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:14:59.189 18:01:15 -- ftl/fio.sh@78 -- # for test in ${tests}
00:14:59.189 18:01:15 -- ftl/fio.sh@79 -- # timing_enter randw-verify
00:14:59.189 18:01:15 -- common/autotest_common.sh@722 -- # xtrace_disable
00:14:59.189 18:01:15 -- common/autotest_common.sh@10 -- # set +x
00:14:59.189 18:01:15 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:14:59.189 18:01:15 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:14:59.189 18:01:15 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:14:59.189 18:01:15 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:14:59.189 18:01:15 -- common/autotest_common.sh@1328 -- # local sanitizers
00:14:59.189 18:01:15 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:14:59.189 18:01:15 -- common/autotest_common.sh@1330 -- # shift
00:14:59.189 18:01:15 -- common/autotest_common.sh@1332 -- # local asan_lib=
00:14:59.189 18:01:15 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:14:59.189 18:01:15 -- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:14:59.189 18:01:15 -- common/autotest_common.sh@1334 -- # grep libasan
00:14:59.189 18:01:15 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:14:59.189 18:01:15 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8
00:14:59.189 18:01:15 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:14:59.189 18:01:15 -- common/autotest_common.sh@1336 -- # break
00:14:59.189 18:01:15 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:14:59.189 18:01:15 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:14:59.189 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1
00:14:59.189 fio-3.35
00:14:59.189 Starting 1 thread
00:15:04.458
00:15:04.458 test: (groupid=0, jobs=1): err= 0: pid=82373: Tue Nov 26 18:01:20 2024
00:15:04.458 read: IOPS=957, BW=63.6MiB/s (66.7MB/s)(255MiB/4003msec)
00:15:04.458 slat (nsec): min=4317, max=21880, avg=5877.69, stdev=1869.23
00:15:04.458 clat (usec): min=280, max=997, avg=469.05, stdev=54.96
00:15:04.458 lat (usec): min=286, max=1005, avg=474.93, stdev=55.25
00:15:04.458 clat percentiles (usec):
00:15:04.458 | 1.00th=[ 355], 5.00th=[ 383], 10.00th=[ 388], 20.00th=[ 433],
00:15:04.458 | 30.00th=[ 449], 40.00th=[ 453], 50.00th=[ 461], 60.00th=[ 482],
00:15:04.458 | 70.00th=[ 515], 80.00th=[ 519], 90.00th=[ 529], 95.00th=[ 537],
00:15:04.458 | 99.00th=[ 594], 99.50th=[ 619], 99.90th=[ 766], 99.95th=[ 832],
00:15:04.458 | 99.99th=[ 996]
00:15:04.458 write: IOPS=964, BW=64.0MiB/s (67.2MB/s)(256MiB/3998msec); 0 zone resets
00:15:04.458 slat (nsec): min=15663, max=96910, avg=18920.93, stdev=4486.84
00:15:04.458 clat (usec): min=352, max=1040, avg=536.59, stdev=70.14
00:15:04.458 lat (usec): min=369, max=1101, avg=555.52, stdev=70.77
00:15:04.458 clat percentiles (usec):
00:15:04.458 | 1.00th=[ 404], 5.00th=[ 429], 10.00th=[ 465], 20.00th=[ 478],
00:15:04.458 | 30.00th=[ 502], 40.00th=[ 529], 50.00th=[ 537], 60.00th=[ 545],
00:15:04.458 | 70.00th=[ 562], 80.00th=[ 594], 90.00th=[ 603], 95.00th=[ 619],
00:15:04.458 | 99.00th=[ 840], 99.50th=[ 889], 99.90th=[ 971], 99.95th=[ 1037],
00:15:04.458 | 99.99th=[ 1037]
00:15:04.458 bw ( KiB/s): min=63512, max=68136, per=100.00%, avg=65940.57, stdev=1758.26, samples=7
00:15:04.458 iops : min= 934, max= 1002, avg=969.71, stdev=25.86, samples=7
00:15:04.458 lat (usec) : 500=46.98%, 750=52.09%, 1000=0.91%
00:15:04.458 lat (msec) : 2=0.03%
00:15:04.458 cpu : usr=99.40%, sys=0.07%, ctx=5, majf=0, minf=1326
00:15:04.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:15:04.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:04.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:04.458 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:04.458 latency : target=0, window=0, percentile=100.00%, depth=1
00:15:04.458
00:15:04.458 Run status group 0 (all jobs):
00:15:04.458 READ: bw=63.6MiB/s (66.7MB/s), 63.6MiB/s-63.6MiB/s (66.7MB/s-66.7MB/s), io=255MiB (267MB), run=4003-4003msec
00:15:04.458 WRITE: bw=64.0MiB/s (67.2MB/s), 64.0MiB/s-64.0MiB/s (67.2MB/s-67.2MB/s), io=256MiB (269MB), run=3998-3998msec
00:15:04.717 -----------------------------------------------------
00:15:04.717 Suppressions used:
00:15:04.717 count bytes template
00:15:04.717 1 5 /usr/src/fio/parse.c
00:15:04.717 1 8 libtcmalloc_minimal.so
00:15:04.717 1 904 libcrypto.so
00:15:04.717 -----------------------------------------------------
00:15:04.717
00:15:04.717 18:01:21 -- ftl/fio.sh@81 -- # timing_exit randw-verify
00:15:04.717 18:01:21 -- common/autotest_common.sh@728 -- # xtrace_disable
00:15:04.717 18:01:21 -- common/autotest_common.sh@10 -- # set +x
00:15:04.717 18:01:21 -- ftl/fio.sh@78 -- # for test in ${tests}
00:15:04.717 18:01:21 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2
00:15:04.717 18:01:21 -- common/autotest_common.sh@722 -- # xtrace_disable
00:15:04.717 18:01:21 -- common/autotest_common.sh@10 -- # set +x
00:15:04.717 18:01:21 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:15:04.717 18:01:21 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:15:04.717 18:01:21 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:15:04.717 18:01:21 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:04.717 18:01:21 -- common/autotest_common.sh@1328 -- # local sanitizers
00:15:04.717 18:01:21 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:04.717 18:01:21 -- common/autotest_common.sh@1330 -- # shift
00:15:04.717 18:01:21 -- common/autotest_common.sh@1332 -- # local asan_lib=
00:15:04.717 18:01:21 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:15:04.717 18:01:21 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:04.717 18:01:21 -- common/autotest_common.sh@1334 -- # grep libasan
00:15:04.717 18:01:21 -- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:15:04.717 18:01:21 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8
00:15:04.717 18:01:21 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:15:04.717 18:01:21 -- common/autotest_common.sh@1336 -- # break
00:15:04.717 18:01:21 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:15:04.717 18:01:21 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:15:05.080 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:15:05.080 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:15:05.080 fio-3.35
00:15:05.080 Starting 2 threads
00:15:31.642
00:15:31.642 first_half: (groupid=0, jobs=1): err= 0: pid=82459: Tue Nov 26 18:01:46 2024
00:15:31.642 read: IOPS=2740, BW=10.7MiB/s (11.2MB/s)(255MiB/23806msec)
00:15:31.642 slat (nsec): min=3428, max=48349, avg=5695.11, stdev=1943.73
00:15:31.642 clat (usec): min=861, max=281697, avg=37168.17, stdev=19640.14
00:15:31.642 lat (usec): min=867, max=281702, avg=37173.86, stdev=19640.37
00:15:31.642 clat percentiles (msec):
00:15:31.642 | 1.00th=[ 13], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 32],
00:15:31.642 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33],
00:15:31.642 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 43], 95.00th=[ 55],
00:15:31.642 | 99.00th=[ 153], 99.50th=[ 169], 99.90th=[ 203], 99.95th=[ 241],
00:15:31.642 | 99.99th=[ 275]
00:15:31.642 write: IOPS=3294, BW=12.9MiB/s (13.5MB/s)(256MiB/19894msec); 0 zone resets
00:15:31.642 slat (usec): min=4, max=2192, avg= 7.53, stdev=13.74
00:15:31.642 clat (usec): min=527, max=90264, avg=9462.55, stdev=15564.80
00:15:31.642 lat (usec): min=534, max=90269, avg=9470.08, stdev=15564.90
00:15:31.642 clat percentiles (usec):
00:15:31.642 | 1.00th=[ 963], 5.00th=[ 1205], 10.00th=[ 1418], 20.00th=[ 1827],
00:15:31.642 | 30.00th=[ 3032], 40.00th=[ 4621], 50.00th=[ 5473], 60.00th=[ 6325],
00:15:31.642 | 70.00th=[ 7373], 80.00th=[10290], 90.00th=[12911], 95.00th=[35914],
00:15:31.642 | 99.00th=[80217], 99.50th=[82314], 99.90th=[85459], 99.95th=[87557],
00:15:31.642 | 99.99th=[89654]
00:15:31.642 bw ( KiB/s): min= 272, max=48688, per=88.64%, avg=22795.13, stdev=14253.09, samples=23
00:15:31.642 iops : min= 68, max=12172, avg=5698.78, stdev=3563.27, samples=23
00:15:31.642 lat (usec) : 750=0.08%, 1000=0.61%
00:15:31.642 lat (msec) : 2=10.86%, 4=7.09%, 10=21.26%, 20=7.12%, 50=47.47%
00:15:31.642 lat (msec) : 100=4.29%, 250=1.18%, 500=0.02%
00:15:31.642 cpu : usr=99.27%, sys=0.26%, ctx=100, majf=0, minf=5581
00:15:31.642 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:15:31.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:31.642 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:31.642 issued rwts: total=65245,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:31.642 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:31.642 second_half: (groupid=0, jobs=1): err= 0: pid=82460: Tue Nov 26 18:01:46 2024
00:15:31.642 read: IOPS=2725, BW=10.6MiB/s (11.2MB/s)(255MiB/23952msec)
00:15:31.642 slat (nsec): min=3351, max=37190, avg=5663.48, stdev=1916.15
00:15:31.642 clat (usec): min=861, max=286170, avg=36593.60, stdev=21235.03
00:15:31.642 lat (usec): min=868, max=286177, avg=36599.26, stdev=21235.31
00:15:31.642 clat percentiles (msec):
00:15:31.642 | 1.00th=[ 9], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 32],
00:15:31.642 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33],
00:15:31.642 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 52],
00:15:31.642 | 99.00th=[ 157], 99.50th=[ 184], 99.90th=[ 205], 99.95th=[ 228],
00:15:31.642 | 99.99th=[ 279]
00:15:31.642 write: IOPS=3214, BW=12.6MiB/s (13.2MB/s)(256MiB/20388msec); 0 zone resets
00:15:31.642 slat (usec): min=4, max=4734, avg= 7.60, stdev=20.52
00:15:31.642 clat (usec): min=432, max=89900, avg=10294.31, stdev=16614.39
00:15:31.642 lat (usec): min=442, max=89906, avg=10301.91, stdev=16614.53
00:15:31.642 clat percentiles (usec):
00:15:31.642 | 1.00th=[ 938], 5.00th=[ 1172], 10.00th=[ 1369], 20.00th=[ 1729],
00:15:31.642 | 30.00th=[ 2769], 40.00th=[ 4359], 50.00th=[ 5342], 60.00th=[ 6259],
00:15:31.642 | 70.00th=[ 7504], 80.00th=[10814], 90.00th=[25822], 95.00th=[49546],
00:15:31.642 | 99.00th=[80217], 99.50th=[83362], 99.90th=[86508], 99.95th=[88605],
00:15:31.642 | 99.99th=[89654]
00:15:31.642 bw ( KiB/s): min= 328, max=49928, per=88.65%, avg=22798.61, stdev=15968.62, samples=23
00:15:31.642 iops : min= 82, max=12482, avg=5699.65, stdev=3992.16, samples=23
00:15:31.642 lat (usec) : 500=0.01%, 750=0.08%, 1000=0.76%
00:15:31.642 lat (msec) : 2=11.66%, 4=6.94%, 10=20.60%, 20=6.42%, 50=48.34%
00:15:31.642 lat (msec) : 100=3.84%, 250=1.34%, 500=0.01%
00:15:31.642 cpu : usr=99.30%, sys=0.22%, ctx=38, majf=0, minf=5563
00:15:31.642 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:15:31.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:31.642 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:31.642 issued rwts: total=65289,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:31.642 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:31.642
00:15:31.642 Run status group 0 (all jobs):
00:15:31.642 READ: bw=21.3MiB/s (22.3MB/s), 10.6MiB/s-10.7MiB/s (11.2MB/s-11.2MB/s), io=510MiB (535MB), run=23806-23952msec
00:15:31.642 WRITE: bw=25.1MiB/s (26.3MB/s), 12.6MiB/s-12.9MiB/s (13.2MB/s-13.5MB/s), io=512MiB (537MB), run=19894-20388msec
00:15:31.642 -----------------------------------------------------
00:15:31.642 Suppressions used:
00:15:31.642 count bytes template
00:15:31.642 2 10 /usr/src/fio/parse.c
00:15:31.642 4 384 /usr/src/fio/iolog.c
00:15:31.642 1 8 libtcmalloc_minimal.so
00:15:31.642 1 904 libcrypto.so
00:15:31.642 -----------------------------------------------------
00:15:31.642
00:15:31.643 18:01:47 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2
00:15:31.643 18:01:47 -- common/autotest_common.sh@728 -- # xtrace_disable
00:15:31.643 18:01:47 -- common/autotest_common.sh@10 -- # set +x
00:15:31.643 18:01:47 -- ftl/fio.sh@78 -- # for test in ${tests}
00:15:31.643 18:01:47 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128
00:15:31.643 18:01:47 -- common/autotest_common.sh@722 -- # xtrace_disable
00:15:31.643 18:01:47 -- common/autotest_common.sh@10 -- # set +x
00:15:31.643 18:01:47 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:15:31.643 18:01:47 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:15:31.643 18:01:47 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:15:31.643 18:01:47 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:31.643 18:01:47 -- common/autotest_common.sh@1328 -- # local sanitizers
00:15:31.643 18:01:47 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:31.643 18:01:47 -- common/autotest_common.sh@1330 -- # shift
00:15:31.643 18:01:47 -- common/autotest_common.sh@1332 -- # local asan_lib=
00:15:31.643 18:01:47 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:15:31.643 18:01:47 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:31.643 18:01:47 -- common/autotest_common.sh@1334 -- # grep libasan
00:15:31.643 18:01:47 -- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:15:31.643 18:01:47 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8
00:15:31.643 18:01:47 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:15:31.643 18:01:47 -- common/autotest_common.sh@1336 -- # break
00:15:31.643 18:01:47 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:15:31.643 18:01:47 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:15:31.643 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:15:46.553 fio-3.35
00:15:46.553 Starting 1 thread
00:15:46.553
00:15:46.553 test: (groupid=0, jobs=1): err= 0: pid=82769: Tue Nov 26 18:02:01 2024
00:15:46.553 read: IOPS=7788, BW=30.4MiB/s (31.9MB/s)(255MiB/8372msec)
00:15:46.553 slat (nsec): min=3336, max=44308, avg=5237.31, stdev=1892.34
00:15:46.553 clat (usec): min=585, max=32177, avg=16427.11, stdev=1254.97
00:15:46.553 lat (usec): min=588, max=32183, avg=16432.35, stdev=1255.13
00:15:46.553 clat percentiles (usec):
00:15:46.553 | 1.00th=[15270], 5.00th=[15533], 10.00th=[15664], 20.00th=[15795],
00:15:46.553 | 30.00th=[15926], 40.00th=[16057], 50.00th=[16188], 60.00th=[16319],
00:15:46.553 | 70.00th=[16450], 80.00th=[16581], 90.00th=[16909], 95.00th=[17957],
00:15:46.553 | 99.00th=[21103], 99.50th=[22938], 99.90th=[28705], 99.95th=[29754],
00:15:46.553 | 99.99th=[31589]
00:15:46.553 write: IOPS=13.8k, BW=54.1MiB/s (56.7MB/s)(256MiB/4734msec); 0 zone resets
00:15:46.553 slat (usec): min=4, max=553, avg= 7.40, stdev= 5.43
00:15:46.553 clat (usec): min=551, max=63233,
avg=9199.40, stdev=11526.04 00:15:46.553 lat (usec): min=559, max=63240, avg=9206.80, stdev=11526.06 00:15:46.553 clat percentiles (usec): 00:15:46.553 | 1.00th=[ 914], 5.00th=[ 1106], 10.00th=[ 1237], 20.00th=[ 1434], 00:15:46.553 | 30.00th=[ 1614], 40.00th=[ 2057], 50.00th=[ 5866], 60.00th=[ 6783], 00:15:46.553 | 70.00th=[ 7832], 80.00th=[ 9765], 90.00th=[32900], 95.00th=[34866], 00:15:46.553 | 99.00th=[41681], 99.50th=[49546], 99.90th=[57410], 99.95th=[59507], 00:15:46.553 | 99.99th=[62129] 00:15:46.553 bw ( KiB/s): min=20640, max=79224, per=94.66%, avg=52415.00, stdev=15474.97, samples=10 00:15:46.553 iops : min= 5160, max=19806, avg=13103.70, stdev=3868.68, samples=10 00:15:46.553 lat (usec) : 750=0.04%, 1000=1.21% 00:15:46.553 lat (msec) : 2=18.60%, 4=1.29%, 10=19.60%, 20=49.71%, 50=9.32% 00:15:46.553 lat (msec) : 100=0.23% 00:15:46.553 cpu : usr=99.05%, sys=0.40%, ctx=40, majf=0, minf=5577 00:15:46.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:46.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.553 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:46.553 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.553 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:46.553 00:15:46.553 Run status group 0 (all jobs): 00:15:46.553 READ: bw=30.4MiB/s (31.9MB/s), 30.4MiB/s-30.4MiB/s (31.9MB/s-31.9MB/s), io=255MiB (267MB), run=8372-8372msec 00:15:46.553 WRITE: bw=54.1MiB/s (56.7MB/s), 54.1MiB/s-54.1MiB/s (56.7MB/s-56.7MB/s), io=256MiB (268MB), run=4734-4734msec 00:15:46.553 ----------------------------------------------------- 00:15:46.553 Suppressions used: 00:15:46.553 count bytes template 00:15:46.553 1 5 /usr/src/fio/parse.c 00:15:46.553 2 192 /usr/src/fio/iolog.c 00:15:46.553 1 8 libtcmalloc_minimal.so 00:15:46.553 1 904 libcrypto.so 00:15:46.553 ----------------------------------------------------- 00:15:46.553 00:15:46.553 18:02:02 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:15:46.553 18:02:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:46.553 18:02:02 -- common/autotest_common.sh@10 -- # set +x 00:15:46.553 18:02:02 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:46.553 Remove shared memory files 00:15:46.553 18:02:02 -- ftl/fio.sh@85 -- # remove_shm 00:15:46.553 18:02:02 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:46.553 18:02:02 -- ftl/common.sh@205 -- # rm -f rm -f 00:15:46.553 18:02:02 -- ftl/common.sh@206 -- # rm -f rm -f 00:15:46.553 18:02:02 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid68664 /dev/shm/spdk_tgt_trace.pid81102 00:15:46.554 18:02:02 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:46.554 18:02:02 -- ftl/common.sh@209 -- # rm -f rm -f 00:15:46.554 ************************************ 00:15:46.554 END TEST ftl_fio_basic 00:15:46.554 ************************************ 00:15:46.554 00:15:46.554 real 0m59.146s 00:15:46.554 user 2m12.660s 00:15:46.554 sys 0m3.756s 00:15:46.554 18:02:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:46.554 18:02:02 -- common/autotest_common.sh@10 -- # set +x 00:15:46.554 18:02:02 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:15:46.554 18:02:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:46.554 18:02:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:46.554 18:02:02 
-- common/autotest_common.sh@10 -- # set +x 00:15:46.554 ************************************ 00:15:46.554 START TEST ftl_bdevperf 00:15:46.554 ************************************ 00:15:46.554 18:02:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:15:46.554 * Looking for test storage... 00:15:46.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:46.554 18:02:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:46.554 18:02:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:46.554 18:02:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:46.554 18:02:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:46.554 18:02:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:46.554 18:02:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:46.554 18:02:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:46.554 18:02:02 -- scripts/common.sh@335 -- # IFS=.-: 00:15:46.554 18:02:02 -- scripts/common.sh@335 -- # read -ra ver1 00:15:46.554 18:02:02 -- scripts/common.sh@336 -- # IFS=.-: 00:15:46.554 18:02:02 -- scripts/common.sh@336 -- # read -ra ver2 00:15:46.554 18:02:02 -- scripts/common.sh@337 -- # local 'op=<' 00:15:46.554 18:02:02 -- scripts/common.sh@339 -- # ver1_l=2 00:15:46.554 18:02:02 -- scripts/common.sh@340 -- # ver2_l=1 00:15:46.554 18:02:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:46.554 18:02:02 -- scripts/common.sh@343 -- # case "$op" in 00:15:46.554 18:02:02 -- scripts/common.sh@344 -- # : 1 00:15:46.554 18:02:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:46.554 18:02:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:46.554 18:02:02 -- scripts/common.sh@364 -- # decimal 1 00:15:46.554 18:02:02 -- scripts/common.sh@352 -- # local d=1 00:15:46.554 18:02:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:46.554 18:02:02 -- scripts/common.sh@354 -- # echo 1 00:15:46.554 18:02:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:46.554 18:02:02 -- scripts/common.sh@365 -- # decimal 2 00:15:46.554 18:02:02 -- scripts/common.sh@352 -- # local d=2 00:15:46.554 18:02:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:46.554 18:02:02 -- scripts/common.sh@354 -- # echo 2 00:15:46.554 18:02:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:46.554 18:02:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:46.554 18:02:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:46.554 18:02:02 -- scripts/common.sh@367 -- # return 0 00:15:46.554 18:02:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:46.554 18:02:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:46.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.554 --rc genhtml_branch_coverage=1 00:15:46.554 --rc genhtml_function_coverage=1 00:15:46.554 --rc genhtml_legend=1 00:15:46.554 --rc geninfo_all_blocks=1 00:15:46.554 --rc geninfo_unexecuted_blocks=1 00:15:46.554 00:15:46.554 ' 00:15:46.554 18:02:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:46.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.554 --rc genhtml_branch_coverage=1 00:15:46.554 --rc genhtml_function_coverage=1 00:15:46.554 --rc genhtml_legend=1 00:15:46.554 --rc geninfo_all_blocks=1 00:15:46.554 --rc geninfo_unexecuted_blocks=1 00:15:46.554 00:15:46.554 ' 00:15:46.554 18:02:02 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:46.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.554 --rc genhtml_branch_coverage=1 00:15:46.554 --rc genhtml_function_coverage=1 00:15:46.554 --rc genhtml_legend=1 00:15:46.554 --rc geninfo_all_blocks=1 00:15:46.554 --rc geninfo_unexecuted_blocks=1 00:15:46.554 00:15:46.554 ' 00:15:46.554 18:02:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:46.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.554 --rc genhtml_branch_coverage=1 00:15:46.554 --rc genhtml_function_coverage=1 00:15:46.554 --rc genhtml_legend=1 00:15:46.554 --rc geninfo_all_blocks=1 00:15:46.554 --rc geninfo_unexecuted_blocks=1 00:15:46.554 00:15:46.554 ' 00:15:46.554 18:02:02 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:46.554 18:02:02 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:15:46.554 18:02:02 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:46.554 18:02:02 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:46.554 18:02:02 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:46.554 18:02:02 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:46.554 18:02:02 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:46.554 18:02:02 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:46.554 18:02:02 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:46.554 18:02:02 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:46.554 18:02:02 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:46.554 18:02:02 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:46.554 18:02:02 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:46.554 18:02:02 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:46.554 18:02:02 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:46.554 18:02:02 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:46.554 18:02:02 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:46.554 18:02:02 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:46.554 18:02:02 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:46.554 18:02:02 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:46.554 18:02:02 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:46.554 18:02:02 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:46.554 18:02:02 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:46.554 18:02:02 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:46.554 18:02:02 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:46.554 18:02:02 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:46.554 18:02:02 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:46.554 18:02:02 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:46.554 18:02:02 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:46.554 18:02:02 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:15:46.554 18:02:02 -- ftl/bdevperf.sh@12 -- # 
cache_device=0000:00:06.0 00:15:46.554 18:02:02 -- ftl/bdevperf.sh@13 -- # use_append= 00:15:46.554 18:02:02 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:46.554 18:02:02 -- ftl/bdevperf.sh@15 -- # timeout=240 00:15:46.554 18:02:02 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:15:46.554 18:02:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:46.554 18:02:02 -- common/autotest_common.sh@10 -- # set +x 00:15:46.554 18:02:02 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=82997 00:15:46.554 18:02:02 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:15:46.554 18:02:02 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:15:46.554 18:02:02 -- ftl/bdevperf.sh@22 -- # waitforlisten 82997 00:15:46.554 18:02:02 -- common/autotest_common.sh@829 -- # '[' -z 82997 ']' 00:15:46.554 18:02:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.554 18:02:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:46.554 18:02:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.554 18:02:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:46.554 18:02:02 -- common/autotest_common.sh@10 -- # set +x 00:15:46.554 [2024-11-26 18:02:02.873087] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:46.554 [2024-11-26 18:02:02.873216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82997 ] 00:15:46.554 [2024-11-26 18:02:03.008061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.554 [2024-11-26 18:02:03.050719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.859 18:02:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:46.859 18:02:03 -- common/autotest_common.sh@862 -- # return 0 00:15:46.859 18:02:03 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:46.859 18:02:03 -- ftl/common.sh@54 -- # local name=nvme0 00:15:46.859 18:02:03 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:46.859 18:02:03 -- ftl/common.sh@56 -- # local size=103424 00:15:46.860 18:02:03 -- ftl/common.sh@59 -- # local base_bdev 00:15:46.860 18:02:03 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:47.119 18:02:04 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:47.119 18:02:04 -- ftl/common.sh@62 -- # local base_size 00:15:47.119 18:02:04 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:47.119 18:02:04 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:15:47.119 18:02:04 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:47.119 18:02:04 -- common/autotest_common.sh@1369 -- # local bs 00:15:47.119 18:02:04 -- common/autotest_common.sh@1370 -- # local nb 00:15:47.119 18:02:04 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:47.378 18:02:04 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:47.378 { 00:15:47.378 "name": "nvme0n1", 
00:15:47.378 "aliases": [ 00:15:47.378 "aeffd59a-ffa5-46f0-8bf2-f17ad70fdab3" 00:15:47.378 ], 00:15:47.378 "product_name": "NVMe disk", 00:15:47.378 "block_size": 4096, 00:15:47.378 "num_blocks": 1310720, 00:15:47.378 "uuid": "aeffd59a-ffa5-46f0-8bf2-f17ad70fdab3", 00:15:47.378 "assigned_rate_limits": { 00:15:47.378 "rw_ios_per_sec": 0, 00:15:47.378 "rw_mbytes_per_sec": 0, 00:15:47.378 "r_mbytes_per_sec": 0, 00:15:47.378 "w_mbytes_per_sec": 0 00:15:47.378 }, 00:15:47.378 "claimed": true, 00:15:47.378 "claim_type": "read_many_write_one", 00:15:47.378 "zoned": false, 00:15:47.378 "supported_io_types": { 00:15:47.378 "read": true, 00:15:47.378 "write": true, 00:15:47.378 "unmap": true, 00:15:47.378 "write_zeroes": true, 00:15:47.378 "flush": true, 00:15:47.378 "reset": true, 00:15:47.378 "compare": true, 00:15:47.378 "compare_and_write": false, 00:15:47.378 "abort": true, 00:15:47.378 "nvme_admin": true, 00:15:47.378 "nvme_io": true 00:15:47.378 }, 00:15:47.378 "driver_specific": { 00:15:47.378 "nvme": [ 00:15:47.378 { 00:15:47.378 "pci_address": "0000:00:07.0", 00:15:47.378 "trid": { 00:15:47.378 "trtype": "PCIe", 00:15:47.378 "traddr": "0000:00:07.0" 00:15:47.378 }, 00:15:47.378 "ctrlr_data": { 00:15:47.378 "cntlid": 0, 00:15:47.378 "vendor_id": "0x1b36", 00:15:47.378 "model_number": "QEMU NVMe Ctrl", 00:15:47.378 "serial_number": "12341", 00:15:47.378 "firmware_revision": "8.0.0", 00:15:47.378 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:47.378 "oacs": { 00:15:47.378 "security": 0, 00:15:47.378 "format": 1, 00:15:47.378 "firmware": 0, 00:15:47.378 "ns_manage": 1 00:15:47.378 }, 00:15:47.378 "multi_ctrlr": false, 00:15:47.378 "ana_reporting": false 00:15:47.378 }, 00:15:47.378 "vs": { 00:15:47.378 "nvme_version": "1.4" 00:15:47.378 }, 00:15:47.378 "ns_data": { 00:15:47.378 "id": 1, 00:15:47.378 "can_share": false 00:15:47.378 } 00:15:47.378 } 00:15:47.378 ], 00:15:47.378 "mp_policy": "active_passive" 00:15:47.378 } 00:15:47.378 } 00:15:47.378 ]' 00:15:47.378 18:02:04 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:47.378 18:02:04 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:47.378 18:02:04 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:47.637 18:02:04 -- common/autotest_common.sh@1373 -- # nb=1310720 00:15:47.637 18:02:04 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:15:47.637 18:02:04 -- common/autotest_common.sh@1377 -- # echo 5120 00:15:47.637 18:02:04 -- ftl/common.sh@63 -- # base_size=5120 00:15:47.637 18:02:04 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:47.637 18:02:04 -- ftl/common.sh@67 -- # clear_lvols 00:15:47.637 18:02:04 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:47.637 18:02:04 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:47.637 18:02:04 -- ftl/common.sh@28 -- # stores=25a5a626-8e7e-4fa9-930b-224693f0b233 00:15:47.637 18:02:04 -- ftl/common.sh@29 -- # for lvs in $stores 00:15:47.637 18:02:04 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 25a5a626-8e7e-4fa9-930b-224693f0b233 00:15:47.897 18:02:04 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:48.156 18:02:04 -- ftl/common.sh@68 -- # lvs=030837ec-d3a0-4634-b12d-f9ae321789b0 00:15:48.156 18:02:04 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 030837ec-d3a0-4634-b12d-f9ae321789b0 00:15:48.415 18:02:05 -- ftl/bdevperf.sh@23 -- # 
split_bdev=bf463821-5bc0-44be-8f2a-a8e237ae938d 00:15:48.415 18:02:05 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 bf463821-5bc0-44be-8f2a-a8e237ae938d 00:15:48.415 18:02:05 -- ftl/common.sh@35 -- # local name=nvc0 00:15:48.415 18:02:05 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:15:48.415 18:02:05 -- ftl/common.sh@37 -- # local base_bdev=bf463821-5bc0-44be-8f2a-a8e237ae938d 00:15:48.415 18:02:05 -- ftl/common.sh@38 -- # local cache_size= 00:15:48.415 18:02:05 -- ftl/common.sh@41 -- # get_bdev_size bf463821-5bc0-44be-8f2a-a8e237ae938d 00:15:48.415 18:02:05 -- common/autotest_common.sh@1367 -- # local bdev_name=bf463821-5bc0-44be-8f2a-a8e237ae938d 00:15:48.415 18:02:05 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:48.415 18:02:05 -- common/autotest_common.sh@1369 -- # local bs 00:15:48.415 18:02:05 -- common/autotest_common.sh@1370 -- # local nb 00:15:48.415 18:02:05 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bf463821-5bc0-44be-8f2a-a8e237ae938d 00:15:48.415 18:02:05 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:48.415 { 00:15:48.415 "name": "bf463821-5bc0-44be-8f2a-a8e237ae938d", 00:15:48.415 "aliases": [ 00:15:48.415 "lvs/nvme0n1p0" 00:15:48.415 ], 00:15:48.415 "product_name": "Logical Volume", 00:15:48.415 "block_size": 4096, 00:15:48.415 "num_blocks": 26476544, 00:15:48.415 "uuid": "bf463821-5bc0-44be-8f2a-a8e237ae938d", 00:15:48.415 "assigned_rate_limits": { 00:15:48.415 "rw_ios_per_sec": 0, 00:15:48.415 "rw_mbytes_per_sec": 0, 00:15:48.415 "r_mbytes_per_sec": 0, 00:15:48.415 "w_mbytes_per_sec": 0 00:15:48.415 }, 00:15:48.415 "claimed": false, 00:15:48.415 "zoned": false, 00:15:48.415 "supported_io_types": { 00:15:48.415 "read": true, 00:15:48.415 "write": true, 00:15:48.415 "unmap": true, 00:15:48.415 "write_zeroes": true, 00:15:48.415 "flush": false, 00:15:48.415 "reset": true, 00:15:48.415 "compare": false, 00:15:48.415 "compare_and_write": false, 00:15:48.415 "abort": false, 00:15:48.415 "nvme_admin": false, 00:15:48.415 "nvme_io": false 00:15:48.415 }, 00:15:48.415 "driver_specific": { 00:15:48.415 "lvol": { 00:15:48.415 "lvol_store_uuid": "030837ec-d3a0-4634-b12d-f9ae321789b0", 00:15:48.415 "base_bdev": "nvme0n1", 00:15:48.415 "thin_provision": true, 00:15:48.415 "snapshot": false, 00:15:48.415 "clone": false, 00:15:48.415 "esnap_clone": false 00:15:48.415 } 00:15:48.415 } 00:15:48.415 } 00:15:48.415 ]' 00:15:48.415 18:02:05 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:48.674 18:02:05 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:48.674 18:02:05 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:48.675 18:02:05 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:48.675 18:02:05 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:48.675 18:02:05 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:48.675 18:02:05 -- ftl/common.sh@41 -- # local base_size=5171 00:15:48.675 18:02:05 -- ftl/common.sh@44 -- # local nvc_bdev 00:15:48.675 18:02:05 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:15:48.933 18:02:05 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:48.933 18:02:05 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:48.933 18:02:05 -- ftl/common.sh@48 -- # get_bdev_size bf463821-5bc0-44be-8f2a-a8e237ae938d 00:15:48.933 18:02:05 -- common/autotest_common.sh@1367 -- # local bdev_name=bf463821-5bc0-44be-8f2a-a8e237ae938d 
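The get_bdev_size helper being traced here boils down to one RPC call plus two jq extractions. A minimal sketch, reconstructed from the jq invocations and arithmetic visible in this trace (rpc.py path taken verbatim from the log; error handling omitted):
    # Fetch bdev geometry over RPC, extract block_size and num_blocks with jq,
    # and report the bdev size in MiB.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    get_bdev_size() {
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$($rpc bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
        echo $((nb * bs / 1024 / 1024))   # 26476544 blocks * 4096 B = 103424 MiB
    }
The same probe ran against nvme0n1 earlier (1310720 blocks * 4096 B = 5120 MiB) and runs twice more against this lvol below, landing on 103424 MiB each time.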
00:15:48.933 18:02:05 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:48.933 18:02:05 -- common/autotest_common.sh@1369 -- # local bs 00:15:48.934 18:02:05 -- common/autotest_common.sh@1370 -- # local nb 00:15:48.934 18:02:05 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bf463821-5bc0-44be-8f2a-a8e237ae938d 00:15:49.192 18:02:05 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:49.192 { 00:15:49.192 "name": "bf463821-5bc0-44be-8f2a-a8e237ae938d", 00:15:49.192 "aliases": [ 00:15:49.192 "lvs/nvme0n1p0" 00:15:49.192 ], 00:15:49.192 "product_name": "Logical Volume", 00:15:49.192 "block_size": 4096, 00:15:49.192 "num_blocks": 26476544, 00:15:49.192 "uuid": "bf463821-5bc0-44be-8f2a-a8e237ae938d", 00:15:49.192 "assigned_rate_limits": { 00:15:49.192 "rw_ios_per_sec": 0, 00:15:49.192 "rw_mbytes_per_sec": 0, 00:15:49.192 "r_mbytes_per_sec": 0, 00:15:49.192 "w_mbytes_per_sec": 0 00:15:49.192 }, 00:15:49.192 "claimed": false, 00:15:49.192 "zoned": false, 00:15:49.192 "supported_io_types": { 00:15:49.192 "read": true, 00:15:49.192 "write": true, 00:15:49.192 "unmap": true, 00:15:49.192 "write_zeroes": true, 00:15:49.192 "flush": false, 00:15:49.192 "reset": true, 00:15:49.192 "compare": false, 00:15:49.192 "compare_and_write": false, 00:15:49.192 "abort": false, 00:15:49.192 "nvme_admin": false, 00:15:49.192 "nvme_io": false 00:15:49.192 }, 00:15:49.192 "driver_specific": { 00:15:49.192 "lvol": { 00:15:49.192 "lvol_store_uuid": "030837ec-d3a0-4634-b12d-f9ae321789b0", 00:15:49.192 "base_bdev": "nvme0n1", 00:15:49.192 "thin_provision": true, 00:15:49.192 "snapshot": false, 00:15:49.192 "clone": false, 00:15:49.193 "esnap_clone": false 00:15:49.193 } 00:15:49.193 } 00:15:49.193 } 00:15:49.193 ]' 00:15:49.193 18:02:05 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:49.193 18:02:05 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:49.193 18:02:06 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:49.193 18:02:06 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:49.193 18:02:06 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:49.193 18:02:06 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:49.193 18:02:06 -- ftl/common.sh@48 -- # cache_size=5171 00:15:49.193 18:02:06 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:49.452 18:02:06 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:15:49.452 18:02:06 -- ftl/bdevperf.sh@26 -- # get_bdev_size bf463821-5bc0-44be-8f2a-a8e237ae938d 00:15:49.452 18:02:06 -- common/autotest_common.sh@1367 -- # local bdev_name=bf463821-5bc0-44be-8f2a-a8e237ae938d 00:15:49.452 18:02:06 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:49.452 18:02:06 -- common/autotest_common.sh@1369 -- # local bs 00:15:49.452 18:02:06 -- common/autotest_common.sh@1370 -- # local nb 00:15:49.452 18:02:06 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bf463821-5bc0-44be-8f2a-a8e237ae938d 00:15:49.710 18:02:06 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:49.710 { 00:15:49.710 "name": "bf463821-5bc0-44be-8f2a-a8e237ae938d", 00:15:49.710 "aliases": [ 00:15:49.710 "lvs/nvme0n1p0" 00:15:49.710 ], 00:15:49.710 "product_name": "Logical Volume", 00:15:49.711 "block_size": 4096, 00:15:49.711 "num_blocks": 26476544, 00:15:49.711 "uuid": "bf463821-5bc0-44be-8f2a-a8e237ae938d", 00:15:49.711 "assigned_rate_limits": { 00:15:49.711 
"rw_ios_per_sec": 0, 00:15:49.711 "rw_mbytes_per_sec": 0, 00:15:49.711 "r_mbytes_per_sec": 0, 00:15:49.711 "w_mbytes_per_sec": 0 00:15:49.711 }, 00:15:49.711 "claimed": false, 00:15:49.711 "zoned": false, 00:15:49.711 "supported_io_types": { 00:15:49.711 "read": true, 00:15:49.711 "write": true, 00:15:49.711 "unmap": true, 00:15:49.711 "write_zeroes": true, 00:15:49.711 "flush": false, 00:15:49.711 "reset": true, 00:15:49.711 "compare": false, 00:15:49.711 "compare_and_write": false, 00:15:49.711 "abort": false, 00:15:49.711 "nvme_admin": false, 00:15:49.711 "nvme_io": false 00:15:49.711 }, 00:15:49.711 "driver_specific": { 00:15:49.711 "lvol": { 00:15:49.711 "lvol_store_uuid": "030837ec-d3a0-4634-b12d-f9ae321789b0", 00:15:49.711 "base_bdev": "nvme0n1", 00:15:49.711 "thin_provision": true, 00:15:49.711 "snapshot": false, 00:15:49.711 "clone": false, 00:15:49.711 "esnap_clone": false 00:15:49.711 } 00:15:49.711 } 00:15:49.711 } 00:15:49.711 ]' 00:15:49.711 18:02:06 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:49.711 18:02:06 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:49.711 18:02:06 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:49.711 18:02:06 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:49.711 18:02:06 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:49.711 18:02:06 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:49.711 18:02:06 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:15:49.711 18:02:06 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d bf463821-5bc0-44be-8f2a-a8e237ae938d -c nvc0n1p0 --l2p_dram_limit 20 00:15:49.970 [2024-11-26 18:02:06.698420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.970 [2024-11-26 18:02:06.698496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:49.970 [2024-11-26 18:02:06.698518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:49.970 [2024-11-26 18:02:06.698529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.970 [2024-11-26 18:02:06.698598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.970 [2024-11-26 18:02:06.698612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:49.970 [2024-11-26 18:02:06.698636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:15:49.970 [2024-11-26 18:02:06.698652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.970 [2024-11-26 18:02:06.698674] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:49.970 [2024-11-26 18:02:06.698947] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:49.970 [2024-11-26 18:02:06.698969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.970 [2024-11-26 18:02:06.698980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:49.970 [2024-11-26 18:02:06.698995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:15:49.970 [2024-11-26 18:02:06.699008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.970 [2024-11-26 18:02:06.699084] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a79b04f8-d71c-42ba-ab7f-e7da541fa18a 00:15:49.970 [2024-11-26 18:02:06.700548] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.970 [2024-11-26 18:02:06.700577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:49.970 [2024-11-26 18:02:06.700589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:15:49.970 [2024-11-26 18:02:06.700601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.970 [2024-11-26 18:02:06.708112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.970 [2024-11-26 18:02:06.708311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:49.970 [2024-11-26 18:02:06.708337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.482 ms 00:15:49.970 [2024-11-26 18:02:06.708354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.970 [2024-11-26 18:02:06.708480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.970 [2024-11-26 18:02:06.708505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:49.970 [2024-11-26 18:02:06.708516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:15:49.970 [2024-11-26 18:02:06.708538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.970 [2024-11-26 18:02:06.708609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.970 [2024-11-26 18:02:06.708625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:49.970 [2024-11-26 18:02:06.708635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:15:49.970 [2024-11-26 18:02:06.708649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.970 [2024-11-26 18:02:06.708682] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:49.970 [2024-11-26 18:02:06.710557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.970 [2024-11-26 18:02:06.710590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:49.970 [2024-11-26 18:02:06.710605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.891 ms 00:15:49.970 [2024-11-26 18:02:06.710619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.970 [2024-11-26 18:02:06.710658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.970 [2024-11-26 18:02:06.710669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:49.970 [2024-11-26 18:02:06.710686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:49.970 [2024-11-26 18:02:06.710699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.970 [2024-11-26 18:02:06.710720] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:49.970 [2024-11-26 18:02:06.710834] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:49.970 [2024-11-26 18:02:06.710853] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:49.970 [2024-11-26 18:02:06.710867] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:49.970 [2024-11-26 18:02:06.710883] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:49.970 [2024-11-26 
18:02:06.710896] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:49.970 [2024-11-26 18:02:06.710910] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:49.970 [2024-11-26 18:02:06.710921] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:49.970 [2024-11-26 18:02:06.710942] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:49.970 [2024-11-26 18:02:06.710953] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:49.970 [2024-11-26 18:02:06.710966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.970 [2024-11-26 18:02:06.710980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:49.970 [2024-11-26 18:02:06.710993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:15:49.970 [2024-11-26 18:02:06.711004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.971 [2024-11-26 18:02:06.711074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.971 [2024-11-26 18:02:06.711087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:49.971 [2024-11-26 18:02:06.711101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:15:49.971 [2024-11-26 18:02:06.711111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.971 [2024-11-26 18:02:06.711188] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:49.971 [2024-11-26 18:02:06.711202] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:49.971 [2024-11-26 18:02:06.711219] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:49.971 [2024-11-26 18:02:06.711230] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:49.971 [2024-11-26 18:02:06.711250] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:49.971 [2024-11-26 18:02:06.711260] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:49.971 [2024-11-26 18:02:06.711273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:49.971 [2024-11-26 18:02:06.711283] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:49.971 [2024-11-26 18:02:06.711295] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:49.971 [2024-11-26 18:02:06.711304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:49.971 [2024-11-26 18:02:06.711316] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:49.971 [2024-11-26 18:02:06.711327] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:49.971 [2024-11-26 18:02:06.711342] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:49.971 [2024-11-26 18:02:06.711354] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:49.971 [2024-11-26 18:02:06.711368] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:15:49.971 [2024-11-26 18:02:06.711378] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:49.971 [2024-11-26 18:02:06.711390] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:49.971 [2024-11-26 18:02:06.711400] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:15:49.971 [2024-11-26 18:02:06.711411] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:49.971 [2024-11-26 18:02:06.711421] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:49.971 [2024-11-26 18:02:06.711433] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:15:49.971 [2024-11-26 18:02:06.711443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:49.971 [2024-11-26 18:02:06.711637] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:49.971 [2024-11-26 18:02:06.711689] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:49.971 [2024-11-26 18:02:06.711725] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:49.971 [2024-11-26 18:02:06.711756] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:49.971 [2024-11-26 18:02:06.711789] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:15:49.971 [2024-11-26 18:02:06.711819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:49.971 [2024-11-26 18:02:06.711916] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:49.971 [2024-11-26 18:02:06.711957] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:49.971 [2024-11-26 18:02:06.711991] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:49.971 [2024-11-26 18:02:06.712021] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:49.971 [2024-11-26 18:02:06.712054] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:15:49.971 [2024-11-26 18:02:06.712084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:49.971 [2024-11-26 18:02:06.712117] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:49.971 [2024-11-26 18:02:06.712151] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:49.971 [2024-11-26 18:02:06.712185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:49.971 [2024-11-26 18:02:06.712215] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:49.971 [2024-11-26 18:02:06.712247] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:15:49.971 [2024-11-26 18:02:06.712277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:49.971 [2024-11-26 18:02:06.712309] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:49.971 [2024-11-26 18:02:06.712341] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:49.971 [2024-11-26 18:02:06.712376] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:49.971 [2024-11-26 18:02:06.712408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:49.971 [2024-11-26 18:02:06.712445] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:49.971 [2024-11-26 18:02:06.712499] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:49.971 [2024-11-26 18:02:06.712537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:49.971 [2024-11-26 18:02:06.712568] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:49.971 [2024-11-26 18:02:06.712604] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:49.971 [2024-11-26 18:02:06.712635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 
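The 80.00 MiB l2p region in the layout dump above follows directly from the two numbers printed alongside it, assuming one address entry per logical block:
    # "L2P entries: 20971520" and "L2P address size: 4" are both reported above
    echo $((20971520 * 4 / 1024 / 1024))   # 80 MiB backing the l2p region
    # --l2p_dram_limit 20 caps the resident slice of that table at 20 MiB,
    # hence the later "l2p maximum resident size is: 19 (of 20) MiB" notice.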
00:15:49.971 [2024-11-26 18:02:06.712748] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:49.971 [2024-11-26 18:02:06.712813] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:49.971 [2024-11-26 18:02:06.712871] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:49.971 [2024-11-26 18:02:06.712925] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:15:49.971 [2024-11-26 18:02:06.712941] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:15:49.971 [2024-11-26 18:02:06.712953] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:15:49.971 [2024-11-26 18:02:06.712967] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:15:49.971 [2024-11-26 18:02:06.712979] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:15:49.971 [2024-11-26 18:02:06.712993] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:15:49.971 [2024-11-26 18:02:06.713005] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:15:49.971 [2024-11-26 18:02:06.713022] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:15:49.971 [2024-11-26 18:02:06.713048] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:15:49.971 [2024-11-26 18:02:06.713063] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:15:49.971 [2024-11-26 18:02:06.713075] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:15:49.971 [2024-11-26 18:02:06.713089] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:15:49.971 [2024-11-26 18:02:06.713100] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:49.971 [2024-11-26 18:02:06.713116] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:49.971 [2024-11-26 18:02:06.713141] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:49.971 [2024-11-26 18:02:06.713156] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:49.971 [2024-11-26 18:02:06.713168] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:49.971 [2024-11-26 18:02:06.713183] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:49.971 [2024-11-26 18:02:06.713196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.971 [2024-11-26 18:02:06.713214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:49.971 [2024-11-26 18:02:06.713226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.053 ms 00:15:49.971 [2024-11-26 18:02:06.713240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.971 [2024-11-26 18:02:06.721903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.971 [2024-11-26 18:02:06.721945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:49.971 [2024-11-26 18:02:06.721959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.611 ms 00:15:49.971 [2024-11-26 18:02:06.721971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.971 [2024-11-26 18:02:06.722053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.971 [2024-11-26 18:02:06.722066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:49.971 [2024-11-26 18:02:06.722080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:15:49.971 [2024-11-26 18:02:06.722092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.971 [2024-11-26 18:02:06.742066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.971 [2024-11-26 18:02:06.742269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:49.971 [2024-11-26 18:02:06.742305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.932 ms 00:15:49.971 [2024-11-26 18:02:06.742322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.971 [2024-11-26 18:02:06.742358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.971 [2024-11-26 18:02:06.742378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:49.971 [2024-11-26 18:02:06.742400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:49.971 [2024-11-26 18:02:06.742417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.971 [2024-11-26 18:02:06.742970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.971 [2024-11-26 18:02:06.742993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:49.972 [2024-11-26 18:02:06.743019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:15:49.972 [2024-11-26 18:02:06.743043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.972 [2024-11-26 18:02:06.743171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.972 [2024-11-26 18:02:06.743189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:49.972 [2024-11-26 18:02:06.743209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:15:49.972 [2024-11-26 18:02:06.743223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.972 [2024-11-26 18:02:06.750924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.972 [2024-11-26 18:02:06.750970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:49.972 [2024-11-26 18:02:06.750985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 7.690 ms 00:15:49.972 [2024-11-26 18:02:06.751032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.972 [2024-11-26 18:02:06.759213] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:15:49.972 [2024-11-26 18:02:06.765360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.972 [2024-11-26 18:02:06.765518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:49.972 [2024-11-26 18:02:06.765548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.275 ms 00:15:49.972 [2024-11-26 18:02:06.765560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.972 [2024-11-26 18:02:06.838221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.972 [2024-11-26 18:02:06.838496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:49.972 [2024-11-26 18:02:06.838533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.737 ms 00:15:49.972 [2024-11-26 18:02:06.838550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.972 [2024-11-26 18:02:06.838598] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:15:49.972 [2024-11-26 18:02:06.838623] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:15:54.177 [2024-11-26 18:02:10.563457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.177 [2024-11-26 18:02:10.563538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:54.177 [2024-11-26 18:02:10.563560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3730.897 ms 00:15:54.177 [2024-11-26 18:02:10.563571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.177 [2024-11-26 18:02:10.563768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.177 [2024-11-26 18:02:10.563791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:54.177 [2024-11-26 18:02:10.563811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:15:54.177 [2024-11-26 18:02:10.563821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.177 [2024-11-26 18:02:10.567870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.177 [2024-11-26 18:02:10.567912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:54.177 [2024-11-26 18:02:10.567933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.001 ms 00:15:54.177 [2024-11-26 18:02:10.567944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.177 [2024-11-26 18:02:10.571032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.177 [2024-11-26 18:02:10.571215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:54.177 [2024-11-26 18:02:10.571247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.053 ms 00:15:54.177 [2024-11-26 18:02:10.571260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.177 [2024-11-26 18:02:10.571450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.177 [2024-11-26 18:02:10.571476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:54.177 
[2024-11-26 18:02:10.571495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:15:54.177 [2024-11-26 18:02:10.571506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.177 [2024-11-26 18:02:10.602382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.177 [2024-11-26 18:02:10.602480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:54.177 [2024-11-26 18:02:10.602512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.898 ms 00:15:54.177 [2024-11-26 18:02:10.602523] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.177 [2024-11-26 18:02:10.607309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.177 [2024-11-26 18:02:10.607351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:54.177 [2024-11-26 18:02:10.607370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.750 ms 00:15:54.177 [2024-11-26 18:02:10.607384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.177 [2024-11-26 18:02:10.609599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.177 [2024-11-26 18:02:10.609628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:54.177 [2024-11-26 18:02:10.609643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.180 ms 00:15:54.177 [2024-11-26 18:02:10.609653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.177 [2024-11-26 18:02:10.613684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.177 [2024-11-26 18:02:10.613721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:54.177 [2024-11-26 18:02:10.613738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.994 ms 00:15:54.177 [2024-11-26 18:02:10.613748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.177 [2024-11-26 18:02:10.613792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.177 [2024-11-26 18:02:10.613804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:54.177 [2024-11-26 18:02:10.613818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:54.177 [2024-11-26 18:02:10.613829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.177 [2024-11-26 18:02:10.613917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.177 [2024-11-26 18:02:10.613930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:54.177 [2024-11-26 18:02:10.613946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:15:54.177 [2024-11-26 18:02:10.613957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.177 [2024-11-26 18:02:10.615194] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3922.737 ms, result 0 00:15:54.177 { 00:15:54.177 "name": "ftl0", 00:15:54.177 "uuid": "a79b04f8-d71c-42ba-ab7f-e7da541fa18a" 00:15:54.177 } 00:15:54.177 18:02:10 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:15:54.177 18:02:10 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:15:54.177 18:02:10 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:15:54.177 18:02:10 -- ftl/bdevperf.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:15:54.177 [2024-11-26 18:02:10.950035] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:54.177 I/O size of 69632 is greater than zero copy threshold (65536). 00:15:54.177 Zero copy mechanism will not be used. 00:15:54.177 Running I/O for 4 seconds... 00:15:58.367 00:15:58.367 Latency(us) 00:15:58.367 [2024-11-26T18:02:15.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.367 [2024-11-26T18:02:15.293Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:15:58.368 ftl0 : 4.00 1551.52 103.03 0.00 0.00 672.07 240.17 19897.68 00:15:58.368 [2024-11-26T18:02:15.294Z] =================================================================================================================== 00:15:58.368 [2024-11-26T18:02:15.294Z] Total : 1551.52 103.03 0.00 0.00 672.07 240.17 19897.68 00:15:58.368 0 00:15:58.368 [2024-11-26 18:02:14.950618] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:58.368 18:02:14 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:15:58.368 [2024-11-26 18:02:15.049633] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:58.368 Running I/O for 4 seconds... 00:16:02.643 00:16:02.643 Latency(us) 00:16:02.643 [2024-11-26T18:02:19.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.643 [2024-11-26T18:02:19.569Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:02.643 ftl0 : 4.01 11476.97 44.83 0.00 0.00 11130.61 228.65 31794.17 00:16:02.643 [2024-11-26T18:02:19.569Z] =================================================================================================================== 00:16:02.643 [2024-11-26T18:02:19.569Z] Total : 11476.97 44.83 0.00 0.00 11130.61 0.00 31794.17 00:16:02.643 [2024-11-26 18:02:19.063711] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:02.643 0 00:16:02.643 18:02:19 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:02.643 [2024-11-26 18:02:19.183363] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:02.643 Running I/O for 4 seconds... 
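For reference while this verify pass runs, these are the three workloads bdevperf is asked to drive in this section, flags exactly as invoked above (the target was started with -z -T ftl0, so perform_tests reaches it over the default RPC socket; the helper variable below is only for brevity):
    bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    $bperf_py perform_tests -q 1   -w randwrite -t 4 -o 69632   # 68 KiB IOs, over the 65536 B zero-copy threshold
    $bperf_py perform_tests -q 128 -w randwrite -t 4 -o 4096
    $bperf_py perform_tests -q 128 -w verify    -t 4 -o 4096    # results directly below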
00:16:06.828 00:16:06.828 Latency(us) 00:16:06.828 [2024-11-26T18:02:23.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.828 [2024-11-26T18:02:23.754Z] Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:06.828 Verification LBA range: start 0x0 length 0x1400000 00:16:06.828 ftl0 : 4.01 12807.27 50.03 0.00 0.00 9970.80 181.77 22845.48 00:16:06.828 [2024-11-26T18:02:23.754Z] =================================================================================================================== 00:16:06.828 [2024-11-26T18:02:23.754Z] Total : 12807.27 50.03 0.00 0.00 9970.80 0.00 22845.48 00:16:06.828 [2024-11-26 18:02:23.189066] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:06.828 0 00:16:06.828 18:02:23 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:06.828 [2024-11-26 18:02:23.393726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.828 [2024-11-26 18:02:23.393784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:06.828 [2024-11-26 18:02:23.393803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:06.828 [2024-11-26 18:02:23.393817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.828 [2024-11-26 18:02:23.393848] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:06.828 [2024-11-26 18:02:23.394520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.828 [2024-11-26 18:02:23.394540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:06.828 [2024-11-26 18:02:23.394551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.657 ms 00:16:06.828 [2024-11-26 18:02:23.394563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.828 [2024-11-26 18:02:23.396236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.828 [2024-11-26 18:02:23.396282] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:06.828 [2024-11-26 18:02:23.396295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.635 ms 00:16:06.828 [2024-11-26 18:02:23.396308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.828 [2024-11-26 18:02:23.595897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.828 [2024-11-26 18:02:23.595983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:06.828 [2024-11-26 18:02:23.596000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 199.890 ms 00:16:06.828 [2024-11-26 18:02:23.596015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.828 [2024-11-26 18:02:23.601147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.828 [2024-11-26 18:02:23.601187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:06.828 [2024-11-26 18:02:23.601199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.080 ms 00:16:06.828 [2024-11-26 18:02:23.601216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.828 [2024-11-26 18:02:23.603184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.828 [2024-11-26 18:02:23.603387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 
00:16:06.828 [2024-11-26 18:02:23.603409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.921 ms 00:16:06.828 [2024-11-26 18:02:23.603440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.828 [2024-11-26 18:02:23.608357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.828 [2024-11-26 18:02:23.608405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:06.828 [2024-11-26 18:02:23.608420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.873 ms 00:16:06.828 [2024-11-26 18:02:23.608433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.828 [2024-11-26 18:02:23.608550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.828 [2024-11-26 18:02:23.608571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:06.828 [2024-11-26 18:02:23.608584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:06.828 [2024-11-26 18:02:23.608597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.828 [2024-11-26 18:02:23.610879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.828 [2024-11-26 18:02:23.610920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:06.828 [2024-11-26 18:02:23.610932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.268 ms 00:16:06.828 [2024-11-26 18:02:23.610948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.828 [2024-11-26 18:02:23.612602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.828 [2024-11-26 18:02:23.612641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:06.828 [2024-11-26 18:02:23.612653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.616 ms 00:16:06.828 [2024-11-26 18:02:23.612665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.828 [2024-11-26 18:02:23.613916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.828 [2024-11-26 18:02:23.614067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:06.828 [2024-11-26 18:02:23.614088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.224 ms 00:16:06.828 [2024-11-26 18:02:23.614101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.828 [2024-11-26 18:02:23.615257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.828 [2024-11-26 18:02:23.615308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:06.828 [2024-11-26 18:02:23.615320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:16:06.828 [2024-11-26 18:02:23.615332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.828 [2024-11-26 18:02:23.615359] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:06.828 [2024-11-26 18:02:23.615384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:06.828 [2024-11-26 18:02:23.615397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:06.828 [2024-11-26 18:02:23.615417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:06.828 [2024-11-26 18:02:23.615429] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:06.828 
[Bands 5 through 100 report identically: 0 / 261120 wr_cnt: 0 state: free] 
00:16:06.830 [2024-11-26 18:02:23.616731] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:06.830 [2024-11-26 18:02:23.616741] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a79b04f8-d71c-42ba-ab7f-e7da541fa18a 00:16:06.830 [2024-11-26 18:02:23.616755] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:06.830 
[2024-11-26 18:02:23.616774] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:06.830 [2024-11-26 18:02:23.616788] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:06.830 [2024-11-26 18:02:23.616798] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:06.830 [2024-11-26 18:02:23.616812] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:06.830 [2024-11-26 18:02:23.616823] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:06.830 [2024-11-26 18:02:23.616836] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:06.830 [2024-11-26 18:02:23.616845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:06.830 [2024-11-26 18:02:23.616857] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:06.830 [2024-11-26 18:02:23.616867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.830 [2024-11-26 18:02:23.616880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:06.830 [2024-11-26 18:02:23.616894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.511 ms 00:16:06.830 [2024-11-26 18:02:23.616910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.830 [2024-11-26 18:02:23.618670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.830 [2024-11-26 18:02:23.618698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:06.830 [2024-11-26 18:02:23.618710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.736 ms 00:16:06.830 [2024-11-26 18:02:23.618723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.830 [2024-11-26 18:02:23.618800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.830 [2024-11-26 18:02:23.618822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:06.830 [2024-11-26 18:02:23.618841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:06.830 [2024-11-26 18:02:23.618854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.830 [2024-11-26 18:02:23.625930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:06.830 [2024-11-26 18:02:23.625959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:06.830 [2024-11-26 18:02:23.625971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:06.830 [2024-11-26 18:02:23.625984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.830 [2024-11-26 18:02:23.626032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:06.830 [2024-11-26 18:02:23.626049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:06.830 [2024-11-26 18:02:23.626060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:06.830 [2024-11-26 18:02:23.626083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.830 [2024-11-26 18:02:23.626177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:06.830 [2024-11-26 18:02:23.626194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:06.830 [2024-11-26 18:02:23.626205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:06.830 [2024-11-26 18:02:23.626218] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.830 [2024-11-26 18:02:23.626237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:06.830 [2024-11-26 18:02:23.626254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:06.830 [2024-11-26 18:02:23.626267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:06.830 [2024-11-26 18:02:23.626280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.830 [2024-11-26 18:02:23.638523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:06.830 [2024-11-26 18:02:23.638579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:06.830 [2024-11-26 18:02:23.638594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:06.830 [2024-11-26 18:02:23.638608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.830 [2024-11-26 18:02:23.643105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:06.830 [2024-11-26 18:02:23.643145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:06.830 [2024-11-26 18:02:23.643158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:06.830 [2024-11-26 18:02:23.643175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.830 [2024-11-26 18:02:23.643239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:06.830 [2024-11-26 18:02:23.643254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:06.830 [2024-11-26 18:02:23.643265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:06.830 [2024-11-26 18:02:23.643279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.830 [2024-11-26 18:02:23.643321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:06.830 [2024-11-26 18:02:23.643337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:06.830 [2024-11-26 18:02:23.643348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:06.830 [2024-11-26 18:02:23.643364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.830 [2024-11-26 18:02:23.643442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:06.830 [2024-11-26 18:02:23.643477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:06.830 [2024-11-26 18:02:23.643489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:06.830 [2024-11-26 18:02:23.643502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.830 [2024-11-26 18:02:23.643539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:06.830 [2024-11-26 18:02:23.643555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:06.830 [2024-11-26 18:02:23.643566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:06.830 [2024-11-26 18:02:23.643600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.830 [2024-11-26 18:02:23.643649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:06.830 [2024-11-26 18:02:23.643663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:06.830 [2024-11-26 18:02:23.643674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:16:06.830 [2024-11-26 18:02:23.643694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.830 [2024-11-26 18:02:23.643741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:06.830 [2024-11-26 18:02:23.643756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:06.830 [2024-11-26 18:02:23.643767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:06.830 [2024-11-26 18:02:23.643783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.830 [2024-11-26 18:02:23.643914] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 250.555 ms, result 0 00:16:06.830 true 00:16:06.830 18:02:23 -- ftl/bdevperf.sh@37 -- # killprocess 82997 00:16:06.830 18:02:23 -- common/autotest_common.sh@936 -- # '[' -z 82997 ']' 00:16:06.830 18:02:23 -- common/autotest_common.sh@940 -- # kill -0 82997 00:16:06.830 18:02:23 -- common/autotest_common.sh@941 -- # uname 00:16:06.830 18:02:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:06.830 18:02:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82997 00:16:06.830 killing process with pid 82997 00:16:06.830 Received shutdown signal, test time was about 4.000000 seconds 00:16:06.830 00:16:06.830 Latency(us) 00:16:06.830 [2024-11-26T18:02:23.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.830 [2024-11-26T18:02:23.756Z] =================================================================================================================== 00:16:06.830 [2024-11-26T18:02:23.756Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:06.830 18:02:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:06.830 18:02:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:06.830 18:02:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82997' 00:16:06.830 18:02:23 -- common/autotest_common.sh@955 -- # kill 82997 00:16:06.830 18:02:23 -- common/autotest_common.sh@960 -- # wait 82997 00:16:07.399 18:02:24 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:16:07.399 18:02:24 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:16:07.399 18:02:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:07.399 18:02:24 -- common/autotest_common.sh@10 -- # set +x 00:16:07.658 Remove shared memory files 00:16:07.658 18:02:24 -- ftl/bdevperf.sh@41 -- # remove_shm 00:16:07.658 18:02:24 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:07.658 18:02:24 -- ftl/common.sh@205 -- # rm -f rm -f 00:16:07.658 18:02:24 -- ftl/common.sh@206 -- # rm -f rm -f 00:16:07.658 18:02:24 -- ftl/common.sh@207 -- # rm -f rm -f 00:16:07.658 18:02:24 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:07.658 18:02:24 -- ftl/common.sh@209 -- # rm -f rm -f 00:16:07.658 ************************************ 00:16:07.658 END TEST ftl_bdevperf 00:16:07.658 ************************************ 00:16:07.658 00:16:07.658 real 0m21.835s 00:16:07.658 user 0m24.485s 00:16:07.658 sys 0m1.148s 00:16:07.658 18:02:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:07.658 18:02:24 -- common/autotest_common.sh@10 -- # set +x 00:16:07.658 18:02:24 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:16:07.658 18:02:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 
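The teardown traced above is autotest_common.sh's killprocess: it rejects an empty PID, checks that 82997 is still alive with kill -0, resolves the process name through ps (refusing to kill anything running as sudo), then signals the process and waits for it to exit, which is why the shutdown-signal table above still reports a clean run. A condensed sketch of that flow, simplified from the traced logic:

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                  # @936: no PID, nothing to do
    kill -0 "$pid" || return 1                 # @940: is the process still alive?
    if [ "$(uname)" = Linux ]; then
      local name
      name=$(ps --no-headers -o comm= "$pid")  # @942: resolve the process name
      [ "$name" = sudo ] && return 1           # never kill a sudo wrapper directly
    fi
    echo "killing process with pid $pid"
    kill "$pid"                                # @955: signal it
    wait "$pid"                                # @960: reap it
  }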
00:16:07.658 18:02:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:07.658 18:02:24 -- common/autotest_common.sh@10 -- # set +x 00:16:07.658 ************************************ 00:16:07.658 START TEST ftl_trim 00:16:07.658 ************************************ 00:16:07.658 18:02:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:16:07.918 * Looking for test storage... 00:16:07.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:07.918 18:02:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:07.918 18:02:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:07.918 18:02:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:07.918 18:02:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:07.918 18:02:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:07.918 18:02:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:07.918 18:02:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:07.918 18:02:24 -- scripts/common.sh@335 -- # IFS=.-: 00:16:07.918 18:02:24 -- scripts/common.sh@335 -- # read -ra ver1 00:16:07.918 18:02:24 -- scripts/common.sh@336 -- # IFS=.-: 00:16:07.918 18:02:24 -- scripts/common.sh@336 -- # read -ra ver2 00:16:07.918 18:02:24 -- scripts/common.sh@337 -- # local 'op=<' 00:16:07.918 18:02:24 -- scripts/common.sh@339 -- # ver1_l=2 00:16:07.918 18:02:24 -- scripts/common.sh@340 -- # ver2_l=1 00:16:07.918 18:02:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:07.918 18:02:24 -- scripts/common.sh@343 -- # case "$op" in 00:16:07.918 18:02:24 -- scripts/common.sh@344 -- # : 1 00:16:07.918 18:02:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:07.918 18:02:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:07.918 18:02:24 -- scripts/common.sh@364 -- # decimal 1 00:16:07.918 18:02:24 -- scripts/common.sh@352 -- # local d=1 00:16:07.918 18:02:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:07.918 18:02:24 -- scripts/common.sh@354 -- # echo 1 00:16:07.918 18:02:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:07.918 18:02:24 -- scripts/common.sh@365 -- # decimal 2 00:16:07.918 18:02:24 -- scripts/common.sh@352 -- # local d=2 00:16:07.918 18:02:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:07.918 18:02:24 -- scripts/common.sh@354 -- # echo 2 00:16:07.918 18:02:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:07.918 18:02:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:07.918 18:02:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:07.918 18:02:24 -- scripts/common.sh@367 -- # return 0 00:16:07.918 18:02:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:07.918 18:02:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:07.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.918 --rc genhtml_branch_coverage=1 00:16:07.918 --rc genhtml_function_coverage=1 00:16:07.918 --rc genhtml_legend=1 00:16:07.918 --rc geninfo_all_blocks=1 00:16:07.918 --rc geninfo_unexecuted_blocks=1 00:16:07.918 00:16:07.918 ' 00:16:07.918 18:02:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:07.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.918 --rc genhtml_branch_coverage=1 00:16:07.918 --rc genhtml_function_coverage=1 00:16:07.918 --rc genhtml_legend=1 00:16:07.918 --rc geninfo_all_blocks=1 00:16:07.918 --rc geninfo_unexecuted_blocks=1 00:16:07.918 00:16:07.918 ' 00:16:07.918 18:02:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:07.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.918 --rc genhtml_branch_coverage=1 00:16:07.918 --rc genhtml_function_coverage=1 00:16:07.918 --rc genhtml_legend=1 00:16:07.918 --rc geninfo_all_blocks=1 00:16:07.918 --rc geninfo_unexecuted_blocks=1 00:16:07.918 00:16:07.918 ' 00:16:07.918 18:02:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:07.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.918 --rc genhtml_branch_coverage=1 00:16:07.918 --rc genhtml_function_coverage=1 00:16:07.918 --rc genhtml_legend=1 00:16:07.918 --rc geninfo_all_blocks=1 00:16:07.918 --rc geninfo_unexecuted_blocks=1 00:16:07.918 00:16:07.918 ' 00:16:07.918 18:02:24 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:07.918 18:02:24 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:07.918 18:02:24 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:07.918 18:02:24 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:07.918 18:02:24 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
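The xtrace above is scripts/common.sh evaluating lt 1.15 2 for the installed lcov: cmp_versions splits both version strings on IFS=.-: and compares the fields numerically, concluding that lcov 1.15 sorts before 2, so the legacy --rc lcov_branch_coverage/--rc lcov_function_coverage spellings are exported in LCOV_OPTS. A trimmed sketch of that comparison, simplified from the traced helper:

  lt() {  # succeeds when dot-separated version $1 sorts before $2
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
      (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
      (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
    done
    return 1    # equal versions are not less-than
  }
  lt 1.15 2 && echo 'lcov < 2: use legacy --rc lcov_* option names'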
00:16:07.918 18:02:24 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:07.918 18:02:24 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:07.918 18:02:24 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:07.918 18:02:24 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:07.918 18:02:24 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:07.918 18:02:24 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:07.918 18:02:24 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:07.918 18:02:24 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:07.918 18:02:24 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:07.918 18:02:24 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:07.918 18:02:24 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:07.918 18:02:24 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:07.918 18:02:24 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:07.918 18:02:24 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:07.918 18:02:24 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:07.918 18:02:24 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:07.918 18:02:24 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:07.918 18:02:24 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:07.919 18:02:24 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:07.919 18:02:24 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:07.919 18:02:24 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:07.919 18:02:24 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:07.919 18:02:24 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:07.919 18:02:24 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:07.919 18:02:24 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:07.919 18:02:24 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:16:07.919 18:02:24 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:16:07.919 18:02:24 -- ftl/trim.sh@25 -- # timeout=240 00:16:07.919 18:02:24 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:07.919 18:02:24 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:07.919 18:02:24 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:07.919 18:02:24 -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:07.919 18:02:24 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:07.919 18:02:24 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:07.919 18:02:24 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:07.919 18:02:24 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:07.919 18:02:24 -- ftl/trim.sh@40 -- # svcpid=83372 00:16:07.919 18:02:24 -- ftl/trim.sh@41 -- # waitforlisten 83372 00:16:07.919 18:02:24 -- common/autotest_common.sh@829 -- # '[' -z 83372 ']' 00:16:07.919 18:02:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
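By this point trim.sh has pinned its parameters (base device 0000:00:07.0, NV cache device 0000:00:06.0, a 240-second RPC timeout, 65536-block data writes, 1024-block unmaps, FTL_BDEV_NAME=ftl0) and recorded the backgrounded target as svcpid=83372; waitforlisten now polls /var/tmp/spdk.sock until the freshly started spdk_tgt answers, bounded by the max_retries=100 budget the trace continues with. A rough standalone equivalent of that wait, assuming the same rpc.py helper:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 83372 || { echo 'spdk_tgt exited before listening' >&2; exit 1; }
    sleep 0.1
  done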
00:16:07.919 18:02:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:07.919 18:02:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.919 18:02:24 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:07.919 18:02:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:07.919 18:02:24 -- common/autotest_common.sh@10 -- # set +x 00:16:08.178 [2024-11-26 18:02:24.847017] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:08.178 [2024-11-26 18:02:24.847250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83372 ] 00:16:08.178 [2024-11-26 18:02:25.005793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:08.178 [2024-11-26 18:02:25.048144] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:08.178 [2024-11-26 18:02:25.048479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.178 [2024-11-26 18:02:25.048611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.178 [2024-11-26 18:02:25.048720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.746 18:02:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:08.746 18:02:25 -- common/autotest_common.sh@862 -- # return 0 00:16:08.746 18:02:25 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:16:08.746 18:02:25 -- ftl/common.sh@54 -- # local name=nvme0 00:16:08.746 18:02:25 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:16:08.746 18:02:25 -- ftl/common.sh@56 -- # local size=103424 00:16:08.746 18:02:25 -- ftl/common.sh@59 -- # local base_bdev 00:16:08.746 18:02:25 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:16:09.314 18:02:25 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:09.314 18:02:25 -- ftl/common.sh@62 -- # local base_size 00:16:09.314 18:02:25 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:09.314 18:02:25 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:16:09.314 18:02:25 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:09.314 18:02:25 -- common/autotest_common.sh@1369 -- # local bs 00:16:09.314 18:02:25 -- common/autotest_common.sh@1370 -- # local nb 00:16:09.314 18:02:25 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:09.314 18:02:26 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:09.314 { 00:16:09.314 "name": "nvme0n1", 00:16:09.314 "aliases": [ 00:16:09.314 "4f8217e7-e020-4208-86c6-1f8f8a4e117b" 00:16:09.314 ], 00:16:09.314 "product_name": "NVMe disk", 00:16:09.314 "block_size": 4096, 00:16:09.314 "num_blocks": 1310720, 00:16:09.314 "uuid": "4f8217e7-e020-4208-86c6-1f8f8a4e117b", 00:16:09.314 "assigned_rate_limits": { 00:16:09.314 "rw_ios_per_sec": 0, 00:16:09.314 "rw_mbytes_per_sec": 0, 00:16:09.314 "r_mbytes_per_sec": 0, 00:16:09.314 "w_mbytes_per_sec": 0 00:16:09.314 }, 00:16:09.314 "claimed": true, 00:16:09.314 "claim_type": "read_many_write_one", 00:16:09.314 "zoned": false, 00:16:09.314 "supported_io_types": { 00:16:09.314 "read": true, 00:16:09.314 "write": true, 00:16:09.314 "unmap": true, 00:16:09.314 
"write_zeroes": true, 00:16:09.314 "flush": true, 00:16:09.314 "reset": true, 00:16:09.314 "compare": true, 00:16:09.314 "compare_and_write": false, 00:16:09.314 "abort": true, 00:16:09.314 "nvme_admin": true, 00:16:09.314 "nvme_io": true 00:16:09.314 }, 00:16:09.314 "driver_specific": { 00:16:09.314 "nvme": [ 00:16:09.314 { 00:16:09.314 "pci_address": "0000:00:07.0", 00:16:09.314 "trid": { 00:16:09.314 "trtype": "PCIe", 00:16:09.314 "traddr": "0000:00:07.0" 00:16:09.314 }, 00:16:09.314 "ctrlr_data": { 00:16:09.314 "cntlid": 0, 00:16:09.314 "vendor_id": "0x1b36", 00:16:09.314 "model_number": "QEMU NVMe Ctrl", 00:16:09.314 "serial_number": "12341", 00:16:09.314 "firmware_revision": "8.0.0", 00:16:09.314 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:09.314 "oacs": { 00:16:09.314 "security": 0, 00:16:09.314 "format": 1, 00:16:09.314 "firmware": 0, 00:16:09.314 "ns_manage": 1 00:16:09.314 }, 00:16:09.314 "multi_ctrlr": false, 00:16:09.314 "ana_reporting": false 00:16:09.314 }, 00:16:09.314 "vs": { 00:16:09.314 "nvme_version": "1.4" 00:16:09.314 }, 00:16:09.314 "ns_data": { 00:16:09.314 "id": 1, 00:16:09.314 "can_share": false 00:16:09.314 } 00:16:09.314 } 00:16:09.314 ], 00:16:09.314 "mp_policy": "active_passive" 00:16:09.314 } 00:16:09.314 } 00:16:09.314 ]' 00:16:09.314 18:02:26 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:09.314 18:02:26 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:09.314 18:02:26 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:09.314 18:02:26 -- common/autotest_common.sh@1373 -- # nb=1310720 00:16:09.314 18:02:26 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:16:09.314 18:02:26 -- common/autotest_common.sh@1377 -- # echo 5120 00:16:09.314 18:02:26 -- ftl/common.sh@63 -- # base_size=5120 00:16:09.314 18:02:26 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:09.314 18:02:26 -- ftl/common.sh@67 -- # clear_lvols 00:16:09.314 18:02:26 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:09.314 18:02:26 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:09.574 18:02:26 -- ftl/common.sh@28 -- # stores=030837ec-d3a0-4634-b12d-f9ae321789b0 00:16:09.574 18:02:26 -- ftl/common.sh@29 -- # for lvs in $stores 00:16:09.574 18:02:26 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 030837ec-d3a0-4634-b12d-f9ae321789b0 00:16:09.833 18:02:26 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:10.117 18:02:26 -- ftl/common.sh@68 -- # lvs=63c94432-7008-451d-9258-795027ee739a 00:16:10.117 18:02:26 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 63c94432-7008-451d-9258-795027ee739a 00:16:10.117 18:02:27 -- ftl/trim.sh@43 -- # split_bdev=8fd49130-eaa2-401e-976b-bb9c028257d0 00:16:10.117 18:02:27 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 8fd49130-eaa2-401e-976b-bb9c028257d0 00:16:10.117 18:02:27 -- ftl/common.sh@35 -- # local name=nvc0 00:16:10.117 18:02:27 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:16:10.117 18:02:27 -- ftl/common.sh@37 -- # local base_bdev=8fd49130-eaa2-401e-976b-bb9c028257d0 00:16:10.117 18:02:27 -- ftl/common.sh@38 -- # local cache_size= 00:16:10.382 18:02:27 -- ftl/common.sh@41 -- # get_bdev_size 8fd49130-eaa2-401e-976b-bb9c028257d0 00:16:10.382 18:02:27 -- common/autotest_common.sh@1367 -- # local bdev_name=8fd49130-eaa2-401e-976b-bb9c028257d0 00:16:10.383 18:02:27 -- 
common/autotest_common.sh@1368 -- # local bdev_info 00:16:10.383 18:02:27 -- common/autotest_common.sh@1369 -- # local bs 00:16:10.383 18:02:27 -- common/autotest_common.sh@1370 -- # local nb 00:16:10.383 18:02:27 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8fd49130-eaa2-401e-976b-bb9c028257d0 00:16:10.383 18:02:27 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:10.383 { 00:16:10.383 "name": "8fd49130-eaa2-401e-976b-bb9c028257d0", 00:16:10.383 "aliases": [ 00:16:10.383 "lvs/nvme0n1p0" 00:16:10.383 ], 00:16:10.383 "product_name": "Logical Volume", 00:16:10.383 "block_size": 4096, 00:16:10.383 "num_blocks": 26476544, 00:16:10.383 "uuid": "8fd49130-eaa2-401e-976b-bb9c028257d0", 00:16:10.383 "assigned_rate_limits": { 00:16:10.383 "rw_ios_per_sec": 0, 00:16:10.383 "rw_mbytes_per_sec": 0, 00:16:10.383 "r_mbytes_per_sec": 0, 00:16:10.383 "w_mbytes_per_sec": 0 00:16:10.383 }, 00:16:10.383 "claimed": false, 00:16:10.383 "zoned": false, 00:16:10.383 "supported_io_types": { 00:16:10.383 "read": true, 00:16:10.383 "write": true, 00:16:10.383 "unmap": true, 00:16:10.383 "write_zeroes": true, 00:16:10.383 "flush": false, 00:16:10.383 "reset": true, 00:16:10.383 "compare": false, 00:16:10.383 "compare_and_write": false, 00:16:10.383 "abort": false, 00:16:10.383 "nvme_admin": false, 00:16:10.383 "nvme_io": false 00:16:10.383 }, 00:16:10.383 "driver_specific": { 00:16:10.383 "lvol": { 00:16:10.383 "lvol_store_uuid": "63c94432-7008-451d-9258-795027ee739a", 00:16:10.383 "base_bdev": "nvme0n1", 00:16:10.383 "thin_provision": true, 00:16:10.383 "snapshot": false, 00:16:10.383 "clone": false, 00:16:10.383 "esnap_clone": false 00:16:10.383 } 00:16:10.383 } 00:16:10.383 } 00:16:10.383 ]' 00:16:10.383 18:02:27 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:10.383 18:02:27 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:10.383 18:02:27 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:10.642 18:02:27 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:10.642 18:02:27 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:10.642 18:02:27 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:10.642 18:02:27 -- ftl/common.sh@41 -- # local base_size=5171 00:16:10.642 18:02:27 -- ftl/common.sh@44 -- # local nvc_bdev 00:16:10.642 18:02:27 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:16:10.900 18:02:27 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:10.900 18:02:27 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:10.900 18:02:27 -- ftl/common.sh@48 -- # get_bdev_size 8fd49130-eaa2-401e-976b-bb9c028257d0 00:16:10.900 18:02:27 -- common/autotest_common.sh@1367 -- # local bdev_name=8fd49130-eaa2-401e-976b-bb9c028257d0 00:16:10.900 18:02:27 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:10.900 18:02:27 -- common/autotest_common.sh@1369 -- # local bs 00:16:10.900 18:02:27 -- common/autotest_common.sh@1370 -- # local nb 00:16:10.900 18:02:27 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8fd49130-eaa2-401e-976b-bb9c028257d0 00:16:10.900 18:02:27 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:10.900 { 00:16:10.900 "name": "8fd49130-eaa2-401e-976b-bb9c028257d0", 00:16:10.900 "aliases": [ 00:16:10.900 "lvs/nvme0n1p0" 00:16:10.900 ], 00:16:10.900 "product_name": "Logical Volume", 00:16:10.900 "block_size": 4096, 00:16:10.900 "num_blocks": 26476544, 
00:16:10.900 "uuid": "8fd49130-eaa2-401e-976b-bb9c028257d0", 00:16:10.900 "assigned_rate_limits": { 00:16:10.900 "rw_ios_per_sec": 0, 00:16:10.900 "rw_mbytes_per_sec": 0, 00:16:10.900 "r_mbytes_per_sec": 0, 00:16:10.900 "w_mbytes_per_sec": 0 00:16:10.900 }, 00:16:10.900 "claimed": false, 00:16:10.900 "zoned": false, 00:16:10.900 "supported_io_types": { 00:16:10.900 "read": true, 00:16:10.900 "write": true, 00:16:10.900 "unmap": true, 00:16:10.900 "write_zeroes": true, 00:16:10.900 "flush": false, 00:16:10.900 "reset": true, 00:16:10.901 "compare": false, 00:16:10.901 "compare_and_write": false, 00:16:10.901 "abort": false, 00:16:10.901 "nvme_admin": false, 00:16:10.901 "nvme_io": false 00:16:10.901 }, 00:16:10.901 "driver_specific": { 00:16:10.901 "lvol": { 00:16:10.901 "lvol_store_uuid": "63c94432-7008-451d-9258-795027ee739a", 00:16:10.901 "base_bdev": "nvme0n1", 00:16:10.901 "thin_provision": true, 00:16:10.901 "snapshot": false, 00:16:10.901 "clone": false, 00:16:10.901 "esnap_clone": false 00:16:10.901 } 00:16:10.901 } 00:16:10.901 } 00:16:10.901 ]' 00:16:10.901 18:02:27 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:11.159 18:02:27 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:11.159 18:02:27 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:11.159 18:02:27 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:11.159 18:02:27 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:11.159 18:02:27 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:11.159 18:02:27 -- ftl/common.sh@48 -- # cache_size=5171 00:16:11.159 18:02:27 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:11.418 18:02:28 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:11.418 18:02:28 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:11.418 18:02:28 -- ftl/trim.sh@47 -- # get_bdev_size 8fd49130-eaa2-401e-976b-bb9c028257d0 00:16:11.418 18:02:28 -- common/autotest_common.sh@1367 -- # local bdev_name=8fd49130-eaa2-401e-976b-bb9c028257d0 00:16:11.418 18:02:28 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:11.418 18:02:28 -- common/autotest_common.sh@1369 -- # local bs 00:16:11.418 18:02:28 -- common/autotest_common.sh@1370 -- # local nb 00:16:11.418 18:02:28 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8fd49130-eaa2-401e-976b-bb9c028257d0 00:16:11.418 18:02:28 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:11.418 { 00:16:11.418 "name": "8fd49130-eaa2-401e-976b-bb9c028257d0", 00:16:11.418 "aliases": [ 00:16:11.418 "lvs/nvme0n1p0" 00:16:11.418 ], 00:16:11.418 "product_name": "Logical Volume", 00:16:11.418 "block_size": 4096, 00:16:11.418 "num_blocks": 26476544, 00:16:11.418 "uuid": "8fd49130-eaa2-401e-976b-bb9c028257d0", 00:16:11.418 "assigned_rate_limits": { 00:16:11.418 "rw_ios_per_sec": 0, 00:16:11.418 "rw_mbytes_per_sec": 0, 00:16:11.418 "r_mbytes_per_sec": 0, 00:16:11.418 "w_mbytes_per_sec": 0 00:16:11.418 }, 00:16:11.418 "claimed": false, 00:16:11.418 "zoned": false, 00:16:11.418 "supported_io_types": { 00:16:11.418 "read": true, 00:16:11.418 "write": true, 00:16:11.418 "unmap": true, 00:16:11.418 "write_zeroes": true, 00:16:11.418 "flush": false, 00:16:11.418 "reset": true, 00:16:11.418 "compare": false, 00:16:11.418 "compare_and_write": false, 00:16:11.418 "abort": false, 00:16:11.418 "nvme_admin": false, 00:16:11.418 "nvme_io": false 00:16:11.418 }, 00:16:11.418 "driver_specific": { 00:16:11.418 "lvol": { 00:16:11.418 
"lvol_store_uuid": "63c94432-7008-451d-9258-795027ee739a", 00:16:11.418 "base_bdev": "nvme0n1", 00:16:11.418 "thin_provision": true, 00:16:11.418 "snapshot": false, 00:16:11.418 "clone": false, 00:16:11.418 "esnap_clone": false 00:16:11.418 } 00:16:11.418 } 00:16:11.418 } 00:16:11.418 ]' 00:16:11.418 18:02:28 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:11.418 18:02:28 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:11.418 18:02:28 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:11.678 18:02:28 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:11.678 18:02:28 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:11.678 18:02:28 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:11.678 18:02:28 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:11.678 18:02:28 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8fd49130-eaa2-401e-976b-bb9c028257d0 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:11.678 [2024-11-26 18:02:28.551333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.678 [2024-11-26 18:02:28.551619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:11.678 [2024-11-26 18:02:28.551656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:11.678 [2024-11-26 18:02:28.551684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.678 [2024-11-26 18:02:28.554633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.678 [2024-11-26 18:02:28.554795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:11.678 [2024-11-26 18:02:28.554828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.904 ms 00:16:11.678 [2024-11-26 18:02:28.554854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.678 [2024-11-26 18:02:28.555000] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:11.678 [2024-11-26 18:02:28.555302] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:11.678 [2024-11-26 18:02:28.555327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.678 [2024-11-26 18:02:28.555340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:11.678 [2024-11-26 18:02:28.555355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:16:11.678 [2024-11-26 18:02:28.555366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.678 [2024-11-26 18:02:28.555516] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e724961a-19a7-434a-83e9-346380d427cd 00:16:11.678 [2024-11-26 18:02:28.556937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.678 [2024-11-26 18:02:28.556973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:11.678 [2024-11-26 18:02:28.556987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:11.678 [2024-11-26 18:02:28.557000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.678 [2024-11-26 18:02:28.564713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.678 [2024-11-26 18:02:28.564919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:11.678 
[2024-11-26 18:02:28.564946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.638 ms 00:16:11.678 [2024-11-26 18:02:28.564965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.678 [2024-11-26 18:02:28.565159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.678 [2024-11-26 18:02:28.565178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:11.678 [2024-11-26 18:02:28.565191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:16:11.678 [2024-11-26 18:02:28.565204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.678 [2024-11-26 18:02:28.565246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.678 [2024-11-26 18:02:28.565263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:11.678 [2024-11-26 18:02:28.565275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:11.678 [2024-11-26 18:02:28.565289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.678 [2024-11-26 18:02:28.565336] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:11.678 [2024-11-26 18:02:28.567330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.678 [2024-11-26 18:02:28.567364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:11.678 [2024-11-26 18:02:28.567381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.005 ms 00:16:11.678 [2024-11-26 18:02:28.567392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.678 [2024-11-26 18:02:28.567484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.678 [2024-11-26 18:02:28.567498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:11.678 [2024-11-26 18:02:28.567516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:11.678 [2024-11-26 18:02:28.567527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.678 [2024-11-26 18:02:28.567566] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:11.678 [2024-11-26 18:02:28.567701] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:11.678 [2024-11-26 18:02:28.567723] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:11.678 [2024-11-26 18:02:28.567738] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:11.678 [2024-11-26 18:02:28.567755] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:11.679 [2024-11-26 18:02:28.567768] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:11.679 [2024-11-26 18:02:28.567789] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:11.679 [2024-11-26 18:02:28.567800] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:11.679 [2024-11-26 18:02:28.567814] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:11.679 [2024-11-26 18:02:28.567825] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:11.679 [2024-11-26 
18:02:28.567839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.679 [2024-11-26 18:02:28.567862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:11.679 [2024-11-26 18:02:28.567878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:16:11.679 [2024-11-26 18:02:28.567900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.679 [2024-11-26 18:02:28.567983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.679 [2024-11-26 18:02:28.567995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:11.679 [2024-11-26 18:02:28.568009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:16:11.679 [2024-11-26 18:02:28.568023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.679 [2024-11-26 18:02:28.568133] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:11.679 [2024-11-26 18:02:28.568146] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:11.679 [2024-11-26 18:02:28.568160] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:11.679 [2024-11-26 18:02:28.568172] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:11.679 [2024-11-26 18:02:28.568186] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:11.679 [2024-11-26 18:02:28.568196] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:11.679 [2024-11-26 18:02:28.568212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:11.679 [2024-11-26 18:02:28.568223] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:11.679 [2024-11-26 18:02:28.568236] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:11.679 [2024-11-26 18:02:28.568246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:11.679 [2024-11-26 18:02:28.568259] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:11.679 [2024-11-26 18:02:28.568270] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:11.679 [2024-11-26 18:02:28.568285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:11.679 [2024-11-26 18:02:28.568297] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:11.679 [2024-11-26 18:02:28.568310] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:11.679 [2024-11-26 18:02:28.568320] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:11.679 [2024-11-26 18:02:28.568333] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:11.679 [2024-11-26 18:02:28.568343] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:11.679 [2024-11-26 18:02:28.568356] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:11.679 [2024-11-26 18:02:28.568366] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:11.679 [2024-11-26 18:02:28.568379] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:11.679 [2024-11-26 18:02:28.568390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:11.679 [2024-11-26 18:02:28.568403] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:11.679 [2024-11-26 18:02:28.568414] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 
MiB 00:16:11.679 [2024-11-26 18:02:28.568427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:11.679 [2024-11-26 18:02:28.568437] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:11.679 [2024-11-26 18:02:28.568450] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:11.679 [2024-11-26 18:02:28.568488] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:11.679 [2024-11-26 18:02:28.568505] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:11.679 [2024-11-26 18:02:28.568515] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:11.679 [2024-11-26 18:02:28.568529] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:11.679 [2024-11-26 18:02:28.568539] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:11.679 [2024-11-26 18:02:28.568552] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:11.679 [2024-11-26 18:02:28.568563] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:11.679 [2024-11-26 18:02:28.568575] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:11.679 [2024-11-26 18:02:28.568586] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:11.679 [2024-11-26 18:02:28.568598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:11.679 [2024-11-26 18:02:28.568609] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:11.679 [2024-11-26 18:02:28.568622] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:11.679 [2024-11-26 18:02:28.568642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:11.679 [2024-11-26 18:02:28.568653] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:11.679 [2024-11-26 18:02:28.568664] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:11.679 [2024-11-26 18:02:28.568676] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:11.679 [2024-11-26 18:02:28.568687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:11.679 [2024-11-26 18:02:28.568706] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:11.679 [2024-11-26 18:02:28.568717] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:11.679 [2024-11-26 18:02:28.568732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:11.679 [2024-11-26 18:02:28.568742] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:11.679 [2024-11-26 18:02:28.568754] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:11.679 [2024-11-26 18:02:28.568764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:11.679 [2024-11-26 18:02:28.568777] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:11.679 [2024-11-26 18:02:28.568790] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:11.679 [2024-11-26 18:02:28.568805] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:11.679 [2024-11-26 18:02:28.568817] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 
ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:11.679 [2024-11-26 18:02:28.568830] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:11.679 [2024-11-26 18:02:28.568841] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:11.679 [2024-11-26 18:02:28.568856] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:11.679 [2024-11-26 18:02:28.568867] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:11.679 [2024-11-26 18:02:28.568880] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:11.679 [2024-11-26 18:02:28.568892] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:11.679 [2024-11-26 18:02:28.568907] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:11.679 [2024-11-26 18:02:28.568918] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:11.679 [2024-11-26 18:02:28.568931] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:11.679 [2024-11-26 18:02:28.568942] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:11.679 [2024-11-26 18:02:28.568955] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:11.679 [2024-11-26 18:02:28.568966] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:11.679 [2024-11-26 18:02:28.568980] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:11.679 [2024-11-26 18:02:28.568995] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:11.679 [2024-11-26 18:02:28.569008] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:11.679 [2024-11-26 18:02:28.569019] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:11.679 [2024-11-26 18:02:28.569033] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:11.679 [2024-11-26 18:02:28.569044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.679 [2024-11-26 18:02:28.569058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:11.679 [2024-11-26 18:02:28.569069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:16:11.679 [2024-11-26 18:02:28.569081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.679 [2024-11-26 18:02:28.577857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:11.679 [2024-11-26 18:02:28.577898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:11.679 [2024-11-26 18:02:28.577913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.665 ms 00:16:11.679 [2024-11-26 18:02:28.577927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.679 [2024-11-26 18:02:28.578073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.679 [2024-11-26 18:02:28.578089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:11.679 [2024-11-26 18:02:28.578115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:16:11.679 [2024-11-26 18:02:28.578129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.679 [2024-11-26 18:02:28.590947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.679 [2024-11-26 18:02:28.591007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:11.679 [2024-11-26 18:02:28.591024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.789 ms 00:16:11.679 [2024-11-26 18:02:28.591039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.680 [2024-11-26 18:02:28.591141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.680 [2024-11-26 18:02:28.591158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:11.680 [2024-11-26 18:02:28.591185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:11.680 [2024-11-26 18:02:28.591199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.680 [2024-11-26 18:02:28.591707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.680 [2024-11-26 18:02:28.591728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:11.680 [2024-11-26 18:02:28.591743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:16:11.680 [2024-11-26 18:02:28.591756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.680 [2024-11-26 18:02:28.591888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.680 [2024-11-26 18:02:28.591905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:11.680 [2024-11-26 18:02:28.591915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:16:11.680 [2024-11-26 18:02:28.591931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.939 [2024-11-26 18:02:28.609398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.939 [2024-11-26 18:02:28.609480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:11.939 [2024-11-26 18:02:28.609502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.455 ms 00:16:11.939 [2024-11-26 18:02:28.609524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.939 [2024-11-26 18:02:28.619527] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:11.939 [2024-11-26 18:02:28.637827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.939 [2024-11-26 18:02:28.637894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:11.939 [2024-11-26 18:02:28.637915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.130 ms 00:16:11.939 
[2024-11-26 18:02:28.637954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.939 [2024-11-26 18:02:28.726006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.939 [2024-11-26 18:02:28.726086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:11.939 [2024-11-26 18:02:28.726106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.060 ms 00:16:11.939 [2024-11-26 18:02:28.726117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.939 [2024-11-26 18:02:28.726191] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:16:11.939 [2024-11-26 18:02:28.726207] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:16.130 [2024-11-26 18:02:32.591466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.130 [2024-11-26 18:02:32.591534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:16.130 [2024-11-26 18:02:32.591557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3871.549 ms 00:16:16.130 [2024-11-26 18:02:32.591568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.130 [2024-11-26 18:02:32.591774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.131 [2024-11-26 18:02:32.591788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:16.131 [2024-11-26 18:02:32.591803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:16:16.131 [2024-11-26 18:02:32.591827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.131 [2024-11-26 18:02:32.595794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.131 [2024-11-26 18:02:32.595833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:16.131 [2024-11-26 18:02:32.595852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.930 ms 00:16:16.131 [2024-11-26 18:02:32.595878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.131 [2024-11-26 18:02:32.598769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.131 [2024-11-26 18:02:32.598802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:16.131 [2024-11-26 18:02:32.598818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.839 ms 00:16:16.131 [2024-11-26 18:02:32.598828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.131 [2024-11-26 18:02:32.599010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.131 [2024-11-26 18:02:32.599024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:16.131 [2024-11-26 18:02:32.599038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:16:16.131 [2024-11-26 18:02:32.599047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.131 [2024-11-26 18:02:32.628959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.131 [2024-11-26 18:02:32.629003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:16.131 [2024-11-26 18:02:32.629022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.907 ms 00:16:16.131 [2024-11-26 18:02:32.629036] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.131 [2024-11-26 18:02:32.633685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.131 [2024-11-26 18:02:32.633725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:16.131 [2024-11-26 18:02:32.633747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.585 ms 00:16:16.131 [2024-11-26 18:02:32.633757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.131 [2024-11-26 18:02:32.638106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.131 [2024-11-26 18:02:32.638147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:16.131 [2024-11-26 18:02:32.638163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.302 ms 00:16:16.131 [2024-11-26 18:02:32.638189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.131 [2024-11-26 18:02:32.642071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.131 [2024-11-26 18:02:32.642107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:16.131 [2024-11-26 18:02:32.642122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.827 ms 00:16:16.131 [2024-11-26 18:02:32.642132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.131 [2024-11-26 18:02:32.642199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.131 [2024-11-26 18:02:32.642211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:16.131 [2024-11-26 18:02:32.642239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:16.131 [2024-11-26 18:02:32.642249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.131 [2024-11-26 18:02:32.642355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.131 [2024-11-26 18:02:32.642368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:16.131 [2024-11-26 18:02:32.642383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:16.131 [2024-11-26 18:02:32.642394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.131 [2024-11-26 18:02:32.643593] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:16.131 [2024-11-26 18:02:32.644574] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4098.601 ms, result 0 00:16:16.131 [2024-11-26 18:02:32.645370] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:16.131 { 00:16:16.131 "name": "ftl0", 00:16:16.131 "uuid": "e724961a-19a7-434a-83e9-346380d427cd" 00:16:16.131 } 00:16:16.131 18:02:32 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:16.131 18:02:32 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:16:16.131 18:02:32 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:16.131 18:02:32 -- common/autotest_common.sh@899 -- # local i 00:16:16.131 18:02:32 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:16.131 18:02:32 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:16.131 18:02:32 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:16.131 18:02:32 -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:16.131 [ 00:16:16.131 { 00:16:16.131 "name": "ftl0", 00:16:16.131 "aliases": [ 00:16:16.131 "e724961a-19a7-434a-83e9-346380d427cd" 00:16:16.131 ], 00:16:16.131 "product_name": "FTL disk", 00:16:16.131 "block_size": 4096, 00:16:16.131 "num_blocks": 23592960, 00:16:16.131 "uuid": "e724961a-19a7-434a-83e9-346380d427cd", 00:16:16.131 "assigned_rate_limits": { 00:16:16.131 "rw_ios_per_sec": 0, 00:16:16.131 "rw_mbytes_per_sec": 0, 00:16:16.131 "r_mbytes_per_sec": 0, 00:16:16.131 "w_mbytes_per_sec": 0 00:16:16.131 }, 00:16:16.131 "claimed": false, 00:16:16.131 "zoned": false, 00:16:16.131 "supported_io_types": { 00:16:16.131 "read": true, 00:16:16.131 "write": true, 00:16:16.131 "unmap": true, 00:16:16.131 "write_zeroes": true, 00:16:16.131 "flush": true, 00:16:16.131 "reset": false, 00:16:16.131 "compare": false, 00:16:16.131 "compare_and_write": false, 00:16:16.131 "abort": false, 00:16:16.131 "nvme_admin": false, 00:16:16.131 "nvme_io": false 00:16:16.131 }, 00:16:16.131 "driver_specific": { 00:16:16.131 "ftl": { 00:16:16.131 "base_bdev": "8fd49130-eaa2-401e-976b-bb9c028257d0", 00:16:16.131 "cache": "nvc0n1p0" 00:16:16.131 } 00:16:16.131 } 00:16:16.131 } 00:16:16.131 ] 00:16:16.390 18:02:33 -- common/autotest_common.sh@905 -- # return 0 00:16:16.390 18:02:33 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:16.390 18:02:33 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:16.390 18:02:33 -- ftl/trim.sh@56 -- # echo ']}' 00:16:16.390 18:02:33 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:16.650 18:02:33 -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:16.650 { 00:16:16.650 "name": "ftl0", 00:16:16.650 "aliases": [ 00:16:16.650 "e724961a-19a7-434a-83e9-346380d427cd" 00:16:16.650 ], 00:16:16.650 "product_name": "FTL disk", 00:16:16.650 "block_size": 4096, 00:16:16.650 "num_blocks": 23592960, 00:16:16.650 "uuid": "e724961a-19a7-434a-83e9-346380d427cd", 00:16:16.650 "assigned_rate_limits": { 00:16:16.650 "rw_ios_per_sec": 0, 00:16:16.650 "rw_mbytes_per_sec": 0, 00:16:16.650 "r_mbytes_per_sec": 0, 00:16:16.650 "w_mbytes_per_sec": 0 00:16:16.650 }, 00:16:16.650 "claimed": false, 00:16:16.650 "zoned": false, 00:16:16.650 "supported_io_types": { 00:16:16.650 "read": true, 00:16:16.650 "write": true, 00:16:16.650 "unmap": true, 00:16:16.650 "write_zeroes": true, 00:16:16.650 "flush": true, 00:16:16.650 "reset": false, 00:16:16.650 "compare": false, 00:16:16.650 "compare_and_write": false, 00:16:16.650 "abort": false, 00:16:16.650 "nvme_admin": false, 00:16:16.650 "nvme_io": false 00:16:16.650 }, 00:16:16.650 "driver_specific": { 00:16:16.650 "ftl": { 00:16:16.650 "base_bdev": "8fd49130-eaa2-401e-976b-bb9c028257d0", 00:16:16.650 "cache": "nvc0n1p0" 00:16:16.650 } 00:16:16.650 } 00:16:16.650 } 00:16:16.650 ]' 00:16:16.650 18:02:33 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:16.650 18:02:33 -- ftl/trim.sh@60 -- # nb=23592960 00:16:16.650 18:02:33 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:16.912 [2024-11-26 18:02:33.695621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.912 [2024-11-26 18:02:33.695680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:16.912 [2024-11-26 18:02:33.695700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:16.912 [2024-11-26 18:02:33.695714] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.912 [2024-11-26 18:02:33.695749] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:16.912 [2024-11-26 18:02:33.696423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.912 [2024-11-26 18:02:33.696444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:16.912 [2024-11-26 18:02:33.696472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:16:16.912 [2024-11-26 18:02:33.696484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.912 [2024-11-26 18:02:33.697017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.912 [2024-11-26 18:02:33.697035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:16.912 [2024-11-26 18:02:33.697052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:16:16.912 [2024-11-26 18:02:33.697062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.912 [2024-11-26 18:02:33.699890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.912 [2024-11-26 18:02:33.699916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:16.912 [2024-11-26 18:02:33.699931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.800 ms 00:16:16.912 [2024-11-26 18:02:33.699941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.912 [2024-11-26 18:02:33.705696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.912 [2024-11-26 18:02:33.705754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:16.912 [2024-11-26 18:02:33.705772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.690 ms 00:16:16.912 [2024-11-26 18:02:33.705782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.912 [2024-11-26 18:02:33.707557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.912 [2024-11-26 18:02:33.707593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:16.912 [2024-11-26 18:02:33.707608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.673 ms 00:16:16.912 [2024-11-26 18:02:33.707618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.912 [2024-11-26 18:02:33.712728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.912 [2024-11-26 18:02:33.712769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:16.912 [2024-11-26 18:02:33.712789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.067 ms 00:16:16.912 [2024-11-26 18:02:33.712813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.912 [2024-11-26 18:02:33.712997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.912 [2024-11-26 18:02:33.713010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:16.912 [2024-11-26 18:02:33.713024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:16:16.912 [2024-11-26 18:02:33.713034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.912 [2024-11-26 18:02:33.714860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.912 [2024-11-26 18:02:33.715062] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:16.912 [2024-11-26 18:02:33.715091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.786 ms 00:16:16.912 [2024-11-26 18:02:33.715103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.912 [2024-11-26 18:02:33.716716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.912 [2024-11-26 18:02:33.716751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:16.912 [2024-11-26 18:02:33.716767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.553 ms 00:16:16.912 [2024-11-26 18:02:33.716777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.912 [2024-11-26 18:02:33.717941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.912 [2024-11-26 18:02:33.717974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:16.912 [2024-11-26 18:02:33.717993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.113 ms 00:16:16.912 [2024-11-26 18:02:33.718004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.912 [2024-11-26 18:02:33.719394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.912 [2024-11-26 18:02:33.719429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:16.912 [2024-11-26 18:02:33.719445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.291 ms 00:16:16.912 [2024-11-26 18:02:33.719472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.912 [2024-11-26 18:02:33.719522] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:16.912 [2024-11-26 18:02:33.719551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:16.912 [2024-11-26 18:02:33.719567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:16.912 [2024-11-26 18:02:33.719578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:16.912 [2024-11-26 18:02:33.719595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:16.912 [2024-11-26 18:02:33.719607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:16.912 [2024-11-26 18:02:33.719621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:16.912 [2024-11-26 18:02:33.719632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:16.912 [2024-11-26 18:02:33.719646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:16.912 [2024-11-26 18:02:33.719657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:16.912 [2024-11-26 18:02:33.719674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:16.912 [2024-11-26 18:02:33.719700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:16.912 [2024-11-26 18:02:33.719716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:16.912 [2024-11-26 18:02:33.719728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:16.912 [2024-11-26 18:02:33.719742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:16.912 [2024-11-26 18:02:33.719754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:16.912 [2024-11-26 18:02:33.719769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:16.912 [2024-11-26 18:02:33.719781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:16.912 [2024-11-26 18:02:33.719795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:16.912 [2024-11-26 18:02:33.719806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:16.912 [2024-11-26 18:02:33.719821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:16.912 [2024-11-26 18:02:33.719833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:16.912 [2024-11-26 18:02:33.719848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:16.912 [2024-11-26 18:02:33.719860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:16.912 [2024-11-26 18:02:33.719874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.719886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.719903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.719915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.719929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.719941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.719957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.719969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.719984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.719995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720062] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720398] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 
18:02:33.720745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:16.913 [2024-11-26 18:02:33.720950] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:16.913 [2024-11-26 18:02:33.720964] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e724961a-19a7-434a-83e9-346380d427cd 00:16:16.913 [2024-11-26 18:02:33.720976] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:16.913 [2024-11-26 18:02:33.720990] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:16.913 [2024-11-26 18:02:33.721001] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:16.913 [2024-11-26 18:02:33.721019] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:16.913 [2024-11-26 18:02:33.721030] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:16.913 [2024-11-26 18:02:33.721046] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:16.913 [2024-11-26 18:02:33.721057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:16.913 [2024-11-26 18:02:33.721070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:16.913 [2024-11-26 18:02:33.721080] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:16.913 [2024-11-26 18:02:33.721093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.913 [2024-11-26 18:02:33.721107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:16.913 [2024-11-26 18:02:33.721122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.576 ms 00:16:16.913 [2024-11-26 18:02:33.721134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:16.913 [2024-11-26 18:02:33.723257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.913 [2024-11-26 18:02:33.723375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:16.914 [2024-11-26 18:02:33.723481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.064 ms 00:16:16.914 [2024-11-26 18:02:33.723526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.914 [2024-11-26 18:02:33.723667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.914 [2024-11-26 18:02:33.723707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:16.914 [2024-11-26 18:02:33.723826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:16:16.914 [2024-11-26 18:02:33.723884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.914 [2024-11-26 18:02:33.731331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.914 [2024-11-26 18:02:33.731487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:16.914 [2024-11-26 18:02:33.731660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.914 [2024-11-26 18:02:33.731678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.914 [2024-11-26 18:02:33.731795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.914 [2024-11-26 18:02:33.731809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:16.914 [2024-11-26 18:02:33.731823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.914 [2024-11-26 18:02:33.731833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.914 [2024-11-26 18:02:33.731909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.914 [2024-11-26 18:02:33.731923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:16.914 [2024-11-26 18:02:33.731937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.914 [2024-11-26 18:02:33.731948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.914 [2024-11-26 18:02:33.731988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.914 [2024-11-26 18:02:33.732002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:16.914 [2024-11-26 18:02:33.732018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.914 [2024-11-26 18:02:33.732028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.914 [2024-11-26 18:02:33.747121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.914 [2024-11-26 18:02:33.747188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:16.914 [2024-11-26 18:02:33.747218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.914 [2024-11-26 18:02:33.747230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.914 [2024-11-26 18:02:33.751987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.914 [2024-11-26 18:02:33.752024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:16.914 [2024-11-26 18:02:33.752040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.914 [2024-11-26 
18:02:33.752051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.914 [2024-11-26 18:02:33.752113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.914 [2024-11-26 18:02:33.752125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:16.914 [2024-11-26 18:02:33.752139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.914 [2024-11-26 18:02:33.752150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.914 [2024-11-26 18:02:33.752211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.914 [2024-11-26 18:02:33.752236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:16.914 [2024-11-26 18:02:33.752257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.914 [2024-11-26 18:02:33.752267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.914 [2024-11-26 18:02:33.752368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.914 [2024-11-26 18:02:33.752381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:16.914 [2024-11-26 18:02:33.752395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.914 [2024-11-26 18:02:33.752405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.914 [2024-11-26 18:02:33.752483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.914 [2024-11-26 18:02:33.752496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:16.914 [2024-11-26 18:02:33.752544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.914 [2024-11-26 18:02:33.752557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.914 [2024-11-26 18:02:33.752612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.914 [2024-11-26 18:02:33.752624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:16.914 [2024-11-26 18:02:33.752637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.914 [2024-11-26 18:02:33.752647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.914 [2024-11-26 18:02:33.752720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.914 [2024-11-26 18:02:33.752733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:16.914 [2024-11-26 18:02:33.752752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.914 [2024-11-26 18:02:33.752762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.914 [2024-11-26 18:02:33.752973] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.410 ms, result 0 00:16:16.914 true 00:16:16.914 18:02:33 -- ftl/trim.sh@63 -- # killprocess 83372 00:16:16.914 18:02:33 -- common/autotest_common.sh@936 -- # '[' -z 83372 ']' 00:16:16.914 18:02:33 -- common/autotest_common.sh@940 -- # kill -0 83372 00:16:16.914 18:02:33 -- common/autotest_common.sh@941 -- # uname 00:16:16.914 18:02:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:16.914 18:02:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83372 00:16:16.914 killing process with pid 83372 00:16:16.914 18:02:33 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:16:16.914 18:02:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:16.914 18:02:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83372' 00:16:16.914 18:02:33 -- common/autotest_common.sh@955 -- # kill 83372 00:16:16.914 18:02:33 -- common/autotest_common.sh@960 -- # wait 83372 00:16:20.201 18:02:36 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:21.138 65536+0 records in 00:16:21.138 65536+0 records out 00:16:21.138 268435456 bytes (268 MB, 256 MiB) copied, 1.03948 s, 258 MB/s 00:16:21.138 18:02:37 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:21.138 [2024-11-26 18:02:37.826215] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:21.138 [2024-11-26 18:02:37.826340] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83565 ] 00:16:21.138 [2024-11-26 18:02:37.977872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.138 [2024-11-26 18:02:38.020018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.420 [2024-11-26 18:02:38.121232] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:21.420 [2024-11-26 18:02:38.121322] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:21.420 [2024-11-26 18:02:38.272746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.420 [2024-11-26 18:02:38.272955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:21.420 [2024-11-26 18:02:38.272984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:21.420 [2024-11-26 18:02:38.272996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.420 [2024-11-26 18:02:38.275406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.420 [2024-11-26 18:02:38.275589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:21.420 [2024-11-26 18:02:38.275612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.386 ms 00:16:21.420 [2024-11-26 18:02:38.275622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.420 [2024-11-26 18:02:38.275762] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:21.420 [2024-11-26 18:02:38.275971] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:21.420 [2024-11-26 18:02:38.275988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.420 [2024-11-26 18:02:38.275998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:21.420 [2024-11-26 18:02:38.276009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:16:21.420 [2024-11-26 18:02:38.276019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.420 [2024-11-26 18:02:38.277501] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:21.420 [2024-11-26 18:02:38.280118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.420 
[2024-11-26 18:02:38.280155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:21.420 [2024-11-26 18:02:38.280168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.623 ms 00:16:21.420 [2024-11-26 18:02:38.280178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.420 [2024-11-26 18:02:38.280243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.420 [2024-11-26 18:02:38.280257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:21.420 [2024-11-26 18:02:38.280268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:16:21.421 [2024-11-26 18:02:38.280277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.421 [2024-11-26 18:02:38.286919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.421 [2024-11-26 18:02:38.287065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:21.421 [2024-11-26 18:02:38.287099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.603 ms 00:16:21.421 [2024-11-26 18:02:38.287110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.421 [2024-11-26 18:02:38.287225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.421 [2024-11-26 18:02:38.287244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:21.421 [2024-11-26 18:02:38.287258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:16:21.421 [2024-11-26 18:02:38.287268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.421 [2024-11-26 18:02:38.287297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.421 [2024-11-26 18:02:38.287307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:21.421 [2024-11-26 18:02:38.287317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:21.421 [2024-11-26 18:02:38.287330] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.421 [2024-11-26 18:02:38.287356] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:21.421 [2024-11-26 18:02:38.288981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.421 [2024-11-26 18:02:38.289008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:21.421 [2024-11-26 18:02:38.289023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.636 ms 00:16:21.421 [2024-11-26 18:02:38.289041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.421 [2024-11-26 18:02:38.289094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.421 [2024-11-26 18:02:38.289109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:21.421 [2024-11-26 18:02:38.289120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:21.421 [2024-11-26 18:02:38.289129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.421 [2024-11-26 18:02:38.289149] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:21.421 [2024-11-26 18:02:38.289177] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:21.421 [2024-11-26 18:02:38.289215] upgrade/ftl_sb_v5.c: 
287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:21.421 [2024-11-26 18:02:38.289241] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:21.421 [2024-11-26 18:02:38.289307] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:21.421 [2024-11-26 18:02:38.289327] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:21.421 [2024-11-26 18:02:38.289339] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:21.421 [2024-11-26 18:02:38.289351] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:21.421 [2024-11-26 18:02:38.289370] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:21.421 [2024-11-26 18:02:38.289381] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:21.421 [2024-11-26 18:02:38.289391] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:21.421 [2024-11-26 18:02:38.289400] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:21.421 [2024-11-26 18:02:38.289415] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:21.421 [2024-11-26 18:02:38.289425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.421 [2024-11-26 18:02:38.289434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:21.421 [2024-11-26 18:02:38.289444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:16:21.421 [2024-11-26 18:02:38.289470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.421 [2024-11-26 18:02:38.289531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.421 [2024-11-26 18:02:38.289542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:21.421 [2024-11-26 18:02:38.289552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:16:21.421 [2024-11-26 18:02:38.289564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.421 [2024-11-26 18:02:38.289637] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:21.421 [2024-11-26 18:02:38.289649] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:21.421 [2024-11-26 18:02:38.289659] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:21.421 [2024-11-26 18:02:38.289669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.421 [2024-11-26 18:02:38.289679] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:21.421 [2024-11-26 18:02:38.289688] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:21.421 [2024-11-26 18:02:38.289699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:21.421 [2024-11-26 18:02:38.289709] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:21.421 [2024-11-26 18:02:38.289718] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:21.421 [2024-11-26 18:02:38.289728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:21.421 [2024-11-26 18:02:38.289736] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:21.421 [2024-11-26 18:02:38.289746] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:21.421 [2024-11-26 18:02:38.289754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:21.421 [2024-11-26 18:02:38.289763] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:21.421 [2024-11-26 18:02:38.289772] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:21.421 [2024-11-26 18:02:38.289784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.421 [2024-11-26 18:02:38.289793] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:21.421 [2024-11-26 18:02:38.289802] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:21.421 [2024-11-26 18:02:38.289811] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.421 [2024-11-26 18:02:38.289819] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:21.421 [2024-11-26 18:02:38.289829] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:21.421 [2024-11-26 18:02:38.289837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:21.421 [2024-11-26 18:02:38.289846] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:21.421 [2024-11-26 18:02:38.289855] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:21.421 [2024-11-26 18:02:38.289864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:21.421 [2024-11-26 18:02:38.289873] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:21.421 [2024-11-26 18:02:38.289881] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:21.421 [2024-11-26 18:02:38.289890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:21.421 [2024-11-26 18:02:38.289898] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:21.421 [2024-11-26 18:02:38.289907] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:21.421 [2024-11-26 18:02:38.289916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:21.421 [2024-11-26 18:02:38.289930] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:21.421 [2024-11-26 18:02:38.289939] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:21.421 [2024-11-26 18:02:38.289948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:21.421 [2024-11-26 18:02:38.289956] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:21.421 [2024-11-26 18:02:38.289965] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:21.421 [2024-11-26 18:02:38.289974] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:21.421 [2024-11-26 18:02:38.289983] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:21.421 [2024-11-26 18:02:38.289992] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:21.421 [2024-11-26 18:02:38.290000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:21.421 [2024-11-26 18:02:38.290009] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:21.421 [2024-11-26 18:02:38.290019] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:21.421 
[2024-11-26 18:02:38.290028] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:21.421 [2024-11-26 18:02:38.290038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.421 [2024-11-26 18:02:38.290048] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:21.421 [2024-11-26 18:02:38.290057] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:21.421 [2024-11-26 18:02:38.290066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:21.421 [2024-11-26 18:02:38.290078] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:21.421 [2024-11-26 18:02:38.290087] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:21.421 [2024-11-26 18:02:38.290096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:21.421 [2024-11-26 18:02:38.290106] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:21.421 [2024-11-26 18:02:38.290117] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:21.421 [2024-11-26 18:02:38.290148] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:21.421 [2024-11-26 18:02:38.290159] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:21.421 [2024-11-26 18:02:38.290169] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:21.421 [2024-11-26 18:02:38.290180] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:21.421 [2024-11-26 18:02:38.290190] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:21.421 [2024-11-26 18:02:38.290200] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:21.421 [2024-11-26 18:02:38.290210] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:21.421 [2024-11-26 18:02:38.290220] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:21.421 [2024-11-26 18:02:38.290230] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:21.421 [2024-11-26 18:02:38.290240] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:21.421 [2024-11-26 18:02:38.290250] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:21.421 [2024-11-26 18:02:38.290263] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:21.421 [2024-11-26 18:02:38.290275] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:21.421 [2024-11-26 18:02:38.290285] upgrade/ftl_sb_v5.c: 
421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:21.421 [2024-11-26 18:02:38.290295] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:21.421 [2024-11-26 18:02:38.290307] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:21.421 [2024-11-26 18:02:38.290317] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:21.421 [2024-11-26 18:02:38.290327] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:21.421 [2024-11-26 18:02:38.290338] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:21.421 [2024-11-26 18:02:38.290349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.421 [2024-11-26 18:02:38.290367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:21.421 [2024-11-26 18:02:38.290378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:16:21.421 [2024-11-26 18:02:38.290387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.421 [2024-11-26 18:02:38.298968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.421 [2024-11-26 18:02:38.299098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:21.421 [2024-11-26 18:02:38.299169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.547 ms 00:16:21.421 [2024-11-26 18:02:38.299212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.421 [2024-11-26 18:02:38.299346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.421 [2024-11-26 18:02:38.299382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:21.421 [2024-11-26 18:02:38.299475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:21.421 [2024-11-26 18:02:38.299524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.421 [2024-11-26 18:02:38.322197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.421 [2024-11-26 18:02:38.322385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:21.421 [2024-11-26 18:02:38.322578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.657 ms 00:16:21.421 [2024-11-26 18:02:38.322640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.421 [2024-11-26 18:02:38.322764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.421 [2024-11-26 18:02:38.322819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:21.421 [2024-11-26 18:02:38.322943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:21.421 [2024-11-26 18:02:38.323011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.421 [2024-11-26 18:02:38.323564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.421 [2024-11-26 18:02:38.323713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:21.421 [2024-11-26 18:02:38.323834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.482 ms 00:16:21.421 [2024-11-26 18:02:38.323887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.421 [2024-11-26 18:02:38.324104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.421 [2024-11-26 18:02:38.324170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:21.421 [2024-11-26 18:02:38.324290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:16:21.421 [2024-11-26 18:02:38.324343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.421 [2024-11-26 18:02:38.332370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.421 [2024-11-26 18:02:38.332414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:21.421 [2024-11-26 18:02:38.332428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.966 ms 00:16:21.421 [2024-11-26 18:02:38.332446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.421 [2024-11-26 18:02:38.335115] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:21.421 [2024-11-26 18:02:38.335154] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:21.421 [2024-11-26 18:02:38.335169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.421 [2024-11-26 18:02:38.335180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:21.421 [2024-11-26 18:02:38.335190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.603 ms 00:16:21.421 [2024-11-26 18:02:38.335200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.682 [2024-11-26 18:02:38.348363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.682 [2024-11-26 18:02:38.348402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:21.682 [2024-11-26 18:02:38.348416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.137 ms 00:16:21.682 [2024-11-26 18:02:38.348436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.682 [2024-11-26 18:02:38.350280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.682 [2024-11-26 18:02:38.350318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:21.682 [2024-11-26 18:02:38.350330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.739 ms 00:16:21.682 [2024-11-26 18:02:38.350340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.682 [2024-11-26 18:02:38.351962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.682 [2024-11-26 18:02:38.351994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:21.682 [2024-11-26 18:02:38.352006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.582 ms 00:16:21.682 [2024-11-26 18:02:38.352016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.682 [2024-11-26 18:02:38.352212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.682 [2024-11-26 18:02:38.352227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:21.682 [2024-11-26 18:02:38.352244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:16:21.682 [2024-11-26 18:02:38.352254] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.682 [2024-11-26 18:02:38.375732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.682 [2024-11-26 18:02:38.375792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:21.682 [2024-11-26 18:02:38.375807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.484 ms 00:16:21.682 [2024-11-26 18:02:38.375818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.682 [2024-11-26 18:02:38.382068] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:21.682 [2024-11-26 18:02:38.398840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.682 [2024-11-26 18:02:38.398896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:21.682 [2024-11-26 18:02:38.398911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.973 ms 00:16:21.682 [2024-11-26 18:02:38.398933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.682 [2024-11-26 18:02:38.399036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.682 [2024-11-26 18:02:38.399049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:21.682 [2024-11-26 18:02:38.399065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:21.682 [2024-11-26 18:02:38.399076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.682 [2024-11-26 18:02:38.399137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.682 [2024-11-26 18:02:38.399149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:21.682 [2024-11-26 18:02:38.399159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:21.682 [2024-11-26 18:02:38.399176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.682 [2024-11-26 18:02:38.401490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.682 [2024-11-26 18:02:38.401523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:21.682 [2024-11-26 18:02:38.401535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.298 ms 00:16:21.682 [2024-11-26 18:02:38.401545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.682 [2024-11-26 18:02:38.401584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.682 [2024-11-26 18:02:38.401614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:21.682 [2024-11-26 18:02:38.401625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:21.682 [2024-11-26 18:02:38.401635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.682 [2024-11-26 18:02:38.401688] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:21.682 [2024-11-26 18:02:38.401705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.682 [2024-11-26 18:02:38.401715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:21.682 [2024-11-26 18:02:38.401726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:21.682 [2024-11-26 18:02:38.401735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.682 [2024-11-26 18:02:38.405472] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.682 [2024-11-26 18:02:38.405508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:21.682 [2024-11-26 18:02:38.405521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.713 ms 00:16:21.682 [2024-11-26 18:02:38.405531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.682 [2024-11-26 18:02:38.405602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.682 [2024-11-26 18:02:38.405615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:21.682 [2024-11-26 18:02:38.405625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:21.682 [2024-11-26 18:02:38.405635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.682 [2024-11-26 18:02:38.406765] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:21.682 [2024-11-26 18:02:38.407761] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 133.759 ms, result 0 00:16:21.682 [2024-11-26 18:02:38.408546] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:21.682 [2024-11-26 18:02:38.416990] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:22.620  [2024-11-26T18:02:40.483Z] Copying: 24/256 [MB] (24 MBps) [2024-11-26T18:02:41.422Z] Copying: 47/256 [MB] (22 MBps) [2024-11-26T18:02:42.803Z] Copying: 70/256 [MB] (23 MBps) [2024-11-26T18:02:43.744Z] Copying: 95/256 [MB] (24 MBps) [2024-11-26T18:02:44.682Z] Copying: 121/256 [MB] (26 MBps) [2024-11-26T18:02:45.619Z] Copying: 148/256 [MB] (27 MBps) [2024-11-26T18:02:46.556Z] Copying: 173/256 [MB] (25 MBps) [2024-11-26T18:02:47.494Z] Copying: 200/256 [MB] (26 MBps) [2024-11-26T18:02:48.431Z] Copying: 225/256 [MB] (25 MBps) [2024-11-26T18:02:48.692Z] Copying: 251/256 [MB] (26 MBps) [2024-11-26T18:02:48.692Z] Copying: 256/256 [MB] (average 25 MBps)[2024-11-26 18:02:48.562324] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:31.766 [2024-11-26 18:02:48.563684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.766 [2024-11-26 18:02:48.563711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:31.766 [2024-11-26 18:02:48.563740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:31.766 [2024-11-26 18:02:48.563755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.766 [2024-11-26 18:02:48.563777] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:31.766 [2024-11-26 18:02:48.564422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.766 [2024-11-26 18:02:48.564439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:31.766 [2024-11-26 18:02:48.564450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms 00:16:31.766 [2024-11-26 18:02:48.564473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.766 [2024-11-26 18:02:48.566258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.766 [2024-11-26 18:02:48.566298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 
00:16:31.766 [2024-11-26 18:02:48.566322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.743 ms 00:16:31.766 [2024-11-26 18:02:48.566331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.767 [2024-11-26 18:02:48.572831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.767 [2024-11-26 18:02:48.573003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:31.767 [2024-11-26 18:02:48.573026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.489 ms 00:16:31.767 [2024-11-26 18:02:48.573037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.767 [2024-11-26 18:02:48.578874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.767 [2024-11-26 18:02:48.578909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:31.767 [2024-11-26 18:02:48.578920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.793 ms 00:16:31.767 [2024-11-26 18:02:48.578942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.767 [2024-11-26 18:02:48.580594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.767 [2024-11-26 18:02:48.580628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:31.767 [2024-11-26 18:02:48.580639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.589 ms 00:16:31.767 [2024-11-26 18:02:48.580648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.767 [2024-11-26 18:02:48.584278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.767 [2024-11-26 18:02:48.584430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:31.767 [2024-11-26 18:02:48.584451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.603 ms 00:16:31.767 [2024-11-26 18:02:48.584486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.767 [2024-11-26 18:02:48.584600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.767 [2024-11-26 18:02:48.584612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:31.767 [2024-11-26 18:02:48.584624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:16:31.767 [2024-11-26 18:02:48.584635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.767 [2024-11-26 18:02:48.586744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.767 [2024-11-26 18:02:48.586777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:31.767 [2024-11-26 18:02:48.586788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.094 ms 00:16:31.767 [2024-11-26 18:02:48.586798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.767 [2024-11-26 18:02:48.588220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.767 [2024-11-26 18:02:48.588255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:31.767 [2024-11-26 18:02:48.588266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.394 ms 00:16:31.767 [2024-11-26 18:02:48.588275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.767 [2024-11-26 18:02:48.589511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.767 [2024-11-26 18:02:48.589542] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:31.767 [2024-11-26 18:02:48.589553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.207 ms 00:16:31.767 [2024-11-26 18:02:48.589562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.767 [2024-11-26 18:02:48.590867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.767 [2024-11-26 18:02:48.591005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:31.767 [2024-11-26 18:02:48.591025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.251 ms 00:16:31.767 [2024-11-26 18:02:48.591035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.767 [2024-11-26 18:02:48.591081] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:31.767 [2024-11-26 18:02:48.591098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 
state: free 00:16:31.767 [2024-11-26 18:02:48.591312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:31.767 [2024-11-26 18:02:48.591562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 
0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.591997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.592007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.592017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.592027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.592038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.592047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.592057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.592067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.592077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.592087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.592097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.592108] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.592118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.592128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.592139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.592149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.592159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.592169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:31.768 [2024-11-26 18:02:48.592186] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:31.768 [2024-11-26 18:02:48.592196] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e724961a-19a7-434a-83e9-346380d427cd 00:16:31.768 [2024-11-26 18:02:48.592207] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:31.768 [2024-11-26 18:02:48.592216] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:31.768 [2024-11-26 18:02:48.592225] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:31.768 [2024-11-26 18:02:48.592237] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:31.768 [2024-11-26 18:02:48.592253] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:31.768 [2024-11-26 18:02:48.592264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:31.768 [2024-11-26 18:02:48.592273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:31.768 [2024-11-26 18:02:48.592282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:31.768 [2024-11-26 18:02:48.592290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:31.768 [2024-11-26 18:02:48.592300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.769 [2024-11-26 18:02:48.592310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:31.769 [2024-11-26 18:02:48.592320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.221 ms 00:16:31.769 [2024-11-26 18:02:48.592336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.769 [2024-11-26 18:02:48.594038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.769 [2024-11-26 18:02:48.594174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:31.769 [2024-11-26 18:02:48.594194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.685 ms 00:16:31.769 [2024-11-26 18:02:48.594205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.769 [2024-11-26 18:02:48.594275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.769 [2024-11-26 18:02:48.594286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:31.769 [2024-11-26 18:02:48.594302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:31.769 [2024-11-26 18:02:48.594311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:31.769 [2024-11-26 18:02:48.601151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.769 [2024-11-26 18:02:48.601261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:31.769 [2024-11-26 18:02:48.601329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.769 [2024-11-26 18:02:48.601364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.769 [2024-11-26 18:02:48.601478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.769 [2024-11-26 18:02:48.601518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:31.769 [2024-11-26 18:02:48.601554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.769 [2024-11-26 18:02:48.601583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.769 [2024-11-26 18:02:48.601713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.769 [2024-11-26 18:02:48.601753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:31.769 [2024-11-26 18:02:48.601783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.769 [2024-11-26 18:02:48.601812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.769 [2024-11-26 18:02:48.601852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.769 [2024-11-26 18:02:48.601882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:31.769 [2024-11-26 18:02:48.601962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.769 [2024-11-26 18:02:48.602003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.769 [2024-11-26 18:02:48.614780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.769 [2024-11-26 18:02:48.614969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:31.769 [2024-11-26 18:02:48.615081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.769 [2024-11-26 18:02:48.615130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.769 [2024-11-26 18:02:48.619674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.769 [2024-11-26 18:02:48.619801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:31.769 [2024-11-26 18:02:48.619895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.769 [2024-11-26 18:02:48.619930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.769 [2024-11-26 18:02:48.619991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.769 [2024-11-26 18:02:48.620024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:31.769 [2024-11-26 18:02:48.620054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.769 [2024-11-26 18:02:48.620082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.769 [2024-11-26 18:02:48.620132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.769 [2024-11-26 18:02:48.620214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:31.769 [2024-11-26 18:02:48.620249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.769 [2024-11-26 
18:02:48.620278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.769 [2024-11-26 18:02:48.620391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.769 [2024-11-26 18:02:48.620426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:31.769 [2024-11-26 18:02:48.620590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.769 [2024-11-26 18:02:48.620622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.769 [2024-11-26 18:02:48.620691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.769 [2024-11-26 18:02:48.620771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:31.769 [2024-11-26 18:02:48.620806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.769 [2024-11-26 18:02:48.620837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.769 [2024-11-26 18:02:48.620903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.769 [2024-11-26 18:02:48.620982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:31.769 [2024-11-26 18:02:48.621054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.769 [2024-11-26 18:02:48.621096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.769 [2024-11-26 18:02:48.621160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.769 [2024-11-26 18:02:48.621204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:31.769 [2024-11-26 18:02:48.621241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.769 [2024-11-26 18:02:48.621270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.769 [2024-11-26 18:02:48.621428] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.831 ms, result 0 00:16:32.028 00:16:32.028 00:16:32.287 18:02:48 -- ftl/trim.sh@72 -- # svcpid=83684 00:16:32.287 18:02:48 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:32.287 18:02:48 -- ftl/trim.sh@73 -- # waitforlisten 83684 00:16:32.287 18:02:48 -- common/autotest_common.sh@829 -- # '[' -z 83684 ']' 00:16:32.287 18:02:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.287 18:02:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:32.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.287 18:02:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.287 18:02:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:32.287 18:02:48 -- common/autotest_common.sh@10 -- # set +x 00:16:32.287 [2024-11-26 18:02:49.064106] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:32.287 [2024-11-26 18:02:49.064259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83684 ] 00:16:32.544 [2024-11-26 18:02:49.216546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.544 [2024-11-26 18:02:49.263818] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:32.544 [2024-11-26 18:02:49.264011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.112 18:02:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:33.112 18:02:49 -- common/autotest_common.sh@862 -- # return 0 00:16:33.112 18:02:49 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:33.373 [2024-11-26 18:02:50.048578] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:33.373 [2024-11-26 18:02:50.048651] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:33.373 [2024-11-26 18:02:50.216118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.373 [2024-11-26 18:02:50.216179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:33.373 [2024-11-26 18:02:50.216198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:33.373 [2024-11-26 18:02:50.216209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.373 [2024-11-26 18:02:50.218611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.373 [2024-11-26 18:02:50.218778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:33.373 [2024-11-26 18:02:50.218805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.382 ms 00:16:33.373 [2024-11-26 18:02:50.218817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.373 [2024-11-26 18:02:50.218982] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:33.373 [2024-11-26 18:02:50.219204] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:33.373 [2024-11-26 18:02:50.219226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.373 [2024-11-26 18:02:50.219236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:33.373 [2024-11-26 18:02:50.219250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:16:33.373 [2024-11-26 18:02:50.219260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.373 [2024-11-26 18:02:50.220749] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:33.373 [2024-11-26 18:02:50.223258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.373 [2024-11-26 18:02:50.223298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:33.373 [2024-11-26 18:02:50.223311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.518 ms 00:16:33.373 [2024-11-26 18:02:50.223324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.373 [2024-11-26 18:02:50.223386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.373 [2024-11-26 18:02:50.223404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:16:33.373 [2024-11-26 18:02:50.223415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:33.373 [2024-11-26 18:02:50.223438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.373 [2024-11-26 18:02:50.230049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.373 [2024-11-26 18:02:50.230208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:33.373 [2024-11-26 18:02:50.230230] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.555 ms 00:16:33.373 [2024-11-26 18:02:50.230243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.373 [2024-11-26 18:02:50.230372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.373 [2024-11-26 18:02:50.230392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:33.373 [2024-11-26 18:02:50.230403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:16:33.373 [2024-11-26 18:02:50.230417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.373 [2024-11-26 18:02:50.230484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.373 [2024-11-26 18:02:50.230499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:33.373 [2024-11-26 18:02:50.230510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:16:33.373 [2024-11-26 18:02:50.230529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.373 [2024-11-26 18:02:50.230564] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:33.373 [2024-11-26 18:02:50.232163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.373 [2024-11-26 18:02:50.232191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:33.373 [2024-11-26 18:02:50.232214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.604 ms 00:16:33.373 [2024-11-26 18:02:50.232224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.373 [2024-11-26 18:02:50.232273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.373 [2024-11-26 18:02:50.232284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:33.373 [2024-11-26 18:02:50.232297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:33.373 [2024-11-26 18:02:50.232307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.373 [2024-11-26 18:02:50.232332] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:33.373 [2024-11-26 18:02:50.232354] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:33.373 [2024-11-26 18:02:50.232399] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:33.373 [2024-11-26 18:02:50.232418] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:33.373 [2024-11-26 18:02:50.232500] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:33.373 [2024-11-26 18:02:50.232515] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:16:33.373 [2024-11-26 18:02:50.232530] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:33.373 [2024-11-26 18:02:50.232550] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:33.373 [2024-11-26 18:02:50.232567] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:33.373 [2024-11-26 18:02:50.232578] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:33.373 [2024-11-26 18:02:50.232593] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:33.374 [2024-11-26 18:02:50.232603] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:33.374 [2024-11-26 18:02:50.232616] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:33.374 [2024-11-26 18:02:50.232626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.374 [2024-11-26 18:02:50.232638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:33.374 [2024-11-26 18:02:50.232648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:16:33.374 [2024-11-26 18:02:50.232660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.374 [2024-11-26 18:02:50.232721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.374 [2024-11-26 18:02:50.232734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:33.374 [2024-11-26 18:02:50.232744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:33.374 [2024-11-26 18:02:50.232756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.374 [2024-11-26 18:02:50.232828] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:33.374 [2024-11-26 18:02:50.232842] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:33.374 [2024-11-26 18:02:50.232853] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:33.374 [2024-11-26 18:02:50.232870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.374 [2024-11-26 18:02:50.232882] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:33.374 [2024-11-26 18:02:50.232894] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:33.374 [2024-11-26 18:02:50.232904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:33.374 [2024-11-26 18:02:50.232916] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:33.374 [2024-11-26 18:02:50.232926] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:33.374 [2024-11-26 18:02:50.232937] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:33.374 [2024-11-26 18:02:50.232947] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:33.374 [2024-11-26 18:02:50.232959] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:33.374 [2024-11-26 18:02:50.232968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:33.374 [2024-11-26 18:02:50.232980] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:33.374 [2024-11-26 18:02:50.232989] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:33.374 [2024-11-26 18:02:50.233000] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.374 [2024-11-26 18:02:50.233009] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:33.374 [2024-11-26 18:02:50.233021] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:33.374 [2024-11-26 18:02:50.233030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.374 [2024-11-26 18:02:50.233044] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:33.374 [2024-11-26 18:02:50.233053] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:33.374 [2024-11-26 18:02:50.233065] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:33.374 [2024-11-26 18:02:50.233074] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:33.374 [2024-11-26 18:02:50.233086] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:33.374 [2024-11-26 18:02:50.233095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:33.374 [2024-11-26 18:02:50.233106] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:33.374 [2024-11-26 18:02:50.233115] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:33.374 [2024-11-26 18:02:50.233126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:33.374 [2024-11-26 18:02:50.233136] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:33.374 [2024-11-26 18:02:50.233149] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:33.374 [2024-11-26 18:02:50.233158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:33.374 [2024-11-26 18:02:50.233169] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:33.374 [2024-11-26 18:02:50.233178] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:33.374 [2024-11-26 18:02:50.233189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:33.374 [2024-11-26 18:02:50.233198] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:33.374 [2024-11-26 18:02:50.233212] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:33.374 [2024-11-26 18:02:50.233222] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:33.374 [2024-11-26 18:02:50.233234] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:33.374 [2024-11-26 18:02:50.233243] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:33.374 [2024-11-26 18:02:50.233255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:33.374 [2024-11-26 18:02:50.233263] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:33.374 [2024-11-26 18:02:50.233276] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:33.374 [2024-11-26 18:02:50.233286] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:33.374 [2024-11-26 18:02:50.233298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.374 [2024-11-26 18:02:50.233307] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:33.374 [2024-11-26 18:02:50.233319] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:33.374 [2024-11-26 18:02:50.233328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:16:33.374 [2024-11-26 18:02:50.233340] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:33.374 [2024-11-26 18:02:50.233349] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:33.374 [2024-11-26 18:02:50.233361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:33.374 [2024-11-26 18:02:50.233370] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:33.374 [2024-11-26 18:02:50.233395] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:33.374 [2024-11-26 18:02:50.233413] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:33.374 [2024-11-26 18:02:50.233427] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:33.374 [2024-11-26 18:02:50.233437] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:33.374 [2024-11-26 18:02:50.233450] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:33.374 [2024-11-26 18:02:50.233471] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:33.374 [2024-11-26 18:02:50.233484] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:33.374 [2024-11-26 18:02:50.233494] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:33.374 [2024-11-26 18:02:50.233507] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:33.374 [2024-11-26 18:02:50.233517] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:33.374 [2024-11-26 18:02:50.233530] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:33.374 [2024-11-26 18:02:50.233539] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:33.374 [2024-11-26 18:02:50.233553] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:33.374 [2024-11-26 18:02:50.233564] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:33.374 [2024-11-26 18:02:50.233576] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:33.374 [2024-11-26 18:02:50.233594] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:33.374 [2024-11-26 18:02:50.233609] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:33.374 [2024-11-26 18:02:50.233620] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:33.374 [2024-11-26 18:02:50.233633] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:33.374 [2024-11-26 18:02:50.233643] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:33.374 [2024-11-26 18:02:50.233656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.374 [2024-11-26 18:02:50.233666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:33.374 [2024-11-26 18:02:50.233679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.862 ms 00:16:33.374 [2024-11-26 18:02:50.233689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.374 [2024-11-26 18:02:50.241935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.374 [2024-11-26 18:02:50.241961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:33.374 [2024-11-26 18:02:50.241978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.207 ms 00:16:33.374 [2024-11-26 18:02:50.241988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.374 [2024-11-26 18:02:50.242097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.374 [2024-11-26 18:02:50.242110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:33.374 [2024-11-26 18:02:50.242132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:33.374 [2024-11-26 18:02:50.242156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.374 [2024-11-26 18:02:50.254235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.374 [2024-11-26 18:02:50.254273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:33.374 [2024-11-26 18:02:50.254293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.069 ms 00:16:33.374 [2024-11-26 18:02:50.254303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.374 [2024-11-26 18:02:50.254374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.374 [2024-11-26 18:02:50.254386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:33.374 [2024-11-26 18:02:50.254400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:33.374 [2024-11-26 18:02:50.254409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.375 [2024-11-26 18:02:50.254856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.375 [2024-11-26 18:02:50.254873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:33.375 [2024-11-26 18:02:50.254887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:16:33.375 [2024-11-26 18:02:50.254909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.375 [2024-11-26 18:02:50.255021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.375 [2024-11-26 18:02:50.255033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:33.375 [2024-11-26 18:02:50.255047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:16:33.375 [2024-11-26 18:02:50.255063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:33.375 [2024-11-26 18:02:50.262172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.375 [2024-11-26 18:02:50.262209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:33.375 [2024-11-26 18:02:50.262224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.091 ms 00:16:33.375 [2024-11-26 18:02:50.262237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.375 [2024-11-26 18:02:50.264728] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:33.375 [2024-11-26 18:02:50.264878] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:33.375 [2024-11-26 18:02:50.264903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.375 [2024-11-26 18:02:50.264914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:33.375 [2024-11-26 18:02:50.264929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.563 ms 00:16:33.375 [2024-11-26 18:02:50.264938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.375 [2024-11-26 18:02:50.277772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.375 [2024-11-26 18:02:50.277811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:33.375 [2024-11-26 18:02:50.277828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.787 ms 00:16:33.375 [2024-11-26 18:02:50.277842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.375 [2024-11-26 18:02:50.279839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.375 [2024-11-26 18:02:50.279874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:33.375 [2024-11-26 18:02:50.279892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.914 ms 00:16:33.375 [2024-11-26 18:02:50.279903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.375 [2024-11-26 18:02:50.281310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.375 [2024-11-26 18:02:50.281344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:33.375 [2024-11-26 18:02:50.281359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.362 ms 00:16:33.375 [2024-11-26 18:02:50.281368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.375 [2024-11-26 18:02:50.281571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.375 [2024-11-26 18:02:50.281585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:33.375 [2024-11-26 18:02:50.281599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:16:33.375 [2024-11-26 18:02:50.281618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.634 [2024-11-26 18:02:50.305003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.634 [2024-11-26 18:02:50.305064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:33.634 [2024-11-26 18:02:50.305085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.392 ms 00:16:33.634 [2024-11-26 18:02:50.305099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.634 [2024-11-26 18:02:50.311483] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:33.634 [2024-11-26 18:02:50.327953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.634 [2024-11-26 18:02:50.328014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:33.634 [2024-11-26 18:02:50.328031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.785 ms 00:16:33.634 [2024-11-26 18:02:50.328061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.634 [2024-11-26 18:02:50.328175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.634 [2024-11-26 18:02:50.328190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:33.634 [2024-11-26 18:02:50.328202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:33.634 [2024-11-26 18:02:50.328214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.634 [2024-11-26 18:02:50.328273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.634 [2024-11-26 18:02:50.328288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:33.634 [2024-11-26 18:02:50.328299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:33.634 [2024-11-26 18:02:50.328311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.634 [2024-11-26 18:02:50.330479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.634 [2024-11-26 18:02:50.330517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:33.634 [2024-11-26 18:02:50.330529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.149 ms 00:16:33.634 [2024-11-26 18:02:50.330552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.634 [2024-11-26 18:02:50.330589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.634 [2024-11-26 18:02:50.330606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:33.634 [2024-11-26 18:02:50.330617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:33.634 [2024-11-26 18:02:50.330630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.634 [2024-11-26 18:02:50.330668] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:33.634 [2024-11-26 18:02:50.330682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.634 [2024-11-26 18:02:50.330692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:33.634 [2024-11-26 18:02:50.330708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:33.634 [2024-11-26 18:02:50.330718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.634 [2024-11-26 18:02:50.334379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.634 [2024-11-26 18:02:50.334418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:33.634 [2024-11-26 18:02:50.334435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.637 ms 00:16:33.634 [2024-11-26 18:02:50.334446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.634 [2024-11-26 18:02:50.334535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.634 [2024-11-26 18:02:50.334548] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:33.634 [2024-11-26 18:02:50.334562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:33.634 [2024-11-26 18:02:50.334575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.634 [2024-11-26 18:02:50.335535] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:33.634 [2024-11-26 18:02:50.336610] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 119.322 ms, result 0 00:16:33.634 [2024-11-26 18:02:50.337763] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:33.634 Some configs were skipped because the RPC state that can call them passed over. 00:16:33.634 18:02:50 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:33.634 [2024-11-26 18:02:50.549681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.634 [2024-11-26 18:02:50.549895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:33.634 [2024-11-26 18:02:50.549978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.848 ms 00:16:33.634 [2024-11-26 18:02:50.550028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.634 [2024-11-26 18:02:50.550095] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 5.268 ms, result 0 00:16:33.634 true 00:16:33.894 18:02:50 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:33.894 [2024-11-26 18:02:50.752946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.894 [2024-11-26 18:02:50.753001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:33.894 [2024-11-26 18:02:50.753020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.209 ms 00:16:33.894 [2024-11-26 18:02:50.753031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.894 [2024-11-26 18:02:50.753071] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 4.337 ms, result 0 00:16:33.894 true 00:16:33.894 18:02:50 -- ftl/trim.sh@81 -- # killprocess 83684 00:16:33.894 18:02:50 -- common/autotest_common.sh@936 -- # '[' -z 83684 ']' 00:16:33.894 18:02:50 -- common/autotest_common.sh@940 -- # kill -0 83684 00:16:33.894 18:02:50 -- common/autotest_common.sh@941 -- # uname 00:16:33.894 18:02:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:33.894 18:02:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83684 00:16:34.155 killing process with pid 83684 00:16:34.155 18:02:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:34.155 18:02:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:34.155 18:02:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83684' 00:16:34.155 18:02:50 -- common/autotest_common.sh@955 -- # kill 83684 00:16:34.155 18:02:50 -- common/autotest_common.sh@960 -- # wait 83684 00:16:34.155 [2024-11-26 18:02:50.968917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.155 [2024-11-26 18:02:50.968982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:34.155 
[2024-11-26 18:02:50.968999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:34.155 [2024-11-26 18:02:50.969031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.155 [2024-11-26 18:02:50.969056] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:34.155 [2024-11-26 18:02:50.969725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.155 [2024-11-26 18:02:50.969741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:34.155 [2024-11-26 18:02:50.969754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:16:34.155 [2024-11-26 18:02:50.969764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.155 [2024-11-26 18:02:50.970043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.155 [2024-11-26 18:02:50.970063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:34.155 [2024-11-26 18:02:50.970076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:16:34.155 [2024-11-26 18:02:50.970094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.155 [2024-11-26 18:02:50.973442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.155 [2024-11-26 18:02:50.973490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:34.155 [2024-11-26 18:02:50.973506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.326 ms 00:16:34.155 [2024-11-26 18:02:50.973516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.155 [2024-11-26 18:02:50.979254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.155 [2024-11-26 18:02:50.979291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:34.155 [2024-11-26 18:02:50.979311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.702 ms 00:16:34.155 [2024-11-26 18:02:50.979321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.155 [2024-11-26 18:02:50.980919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.155 [2024-11-26 18:02:50.980953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:34.155 [2024-11-26 18:02:50.980968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.508 ms 00:16:34.155 [2024-11-26 18:02:50.980977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.155 [2024-11-26 18:02:50.984750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.155 [2024-11-26 18:02:50.984786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:34.155 [2024-11-26 18:02:50.984809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.739 ms 00:16:34.155 [2024-11-26 18:02:50.984825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.155 [2024-11-26 18:02:50.984942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.155 [2024-11-26 18:02:50.984954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:34.155 [2024-11-26 18:02:50.984967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:16:34.155 [2024-11-26 18:02:50.984977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.155 [2024-11-26 
18:02:50.987192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.156 [2024-11-26 18:02:50.987342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:34.156 [2024-11-26 18:02:50.987432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.188 ms 00:16:34.156 [2024-11-26 18:02:50.987498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.156 [2024-11-26 18:02:50.988999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.156 [2024-11-26 18:02:50.989121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:34.156 [2024-11-26 18:02:50.989193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.422 ms 00:16:34.156 [2024-11-26 18:02:50.989227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.156 [2024-11-26 18:02:50.990386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.156 [2024-11-26 18:02:50.990519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:34.156 [2024-11-26 18:02:50.990594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.076 ms 00:16:34.156 [2024-11-26 18:02:50.990628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.156 [2024-11-26 18:02:50.991795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.156 [2024-11-26 18:02:50.991915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:34.156 [2024-11-26 18:02:50.991937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms 00:16:34.156 [2024-11-26 18:02:50.991947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.156 [2024-11-26 18:02:50.991984] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:34.156 [2024-11-26 18:02:50.992000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992471] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 
18:02:50.992771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.992993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.993004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.993019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.993029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.993042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.993053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.993066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:16:34.156 [2024-11-26 18:02:50.993077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.993090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.993101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.993115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.993125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.993138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.993149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.993161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.993172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.993185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.993196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.993211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.993222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.993235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:34.156 [2024-11-26 18:02:50.993254] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:34.156 [2024-11-26 18:02:50.993266] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e724961a-19a7-434a-83e9-346380d427cd 00:16:34.156 [2024-11-26 18:02:50.993278] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:34.156 [2024-11-26 18:02:50.993303] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:34.156 [2024-11-26 18:02:50.993313] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:34.156 [2024-11-26 18:02:50.993326] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:34.157 [2024-11-26 18:02:50.993336] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:34.157 [2024-11-26 18:02:50.993348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:34.157 [2024-11-26 18:02:50.993358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:34.157 [2024-11-26 18:02:50.993370] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:34.157 [2024-11-26 18:02:50.993379] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:34.157 [2024-11-26 18:02:50.993391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.157 [2024-11-26 18:02:50.993405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:34.157 [2024-11-26 18:02:50.993421] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.411 ms 00:16:34.157 [2024-11-26 18:02:50.993431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.157 [2024-11-26 18:02:50.995136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.157 [2024-11-26 18:02:50.995158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:34.157 [2024-11-26 18:02:50.995172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.683 ms 00:16:34.157 [2024-11-26 18:02:50.995183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.157 [2024-11-26 18:02:50.995263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.157 [2024-11-26 18:02:50.995275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:34.157 [2024-11-26 18:02:50.995288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:16:34.157 [2024-11-26 18:02:50.995297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.157 [2024-11-26 18:02:51.002520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.157 [2024-11-26 18:02:51.002645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:34.157 [2024-11-26 18:02:51.002720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.157 [2024-11-26 18:02:51.002756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.157 [2024-11-26 18:02:51.002864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.157 [2024-11-26 18:02:51.002900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:34.157 [2024-11-26 18:02:51.002935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.157 [2024-11-26 18:02:51.002965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.157 [2024-11-26 18:02:51.003109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.157 [2024-11-26 18:02:51.003150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:34.157 [2024-11-26 18:02:51.003183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.157 [2024-11-26 18:02:51.003213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.157 [2024-11-26 18:02:51.003266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.157 [2024-11-26 18:02:51.003399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:34.157 [2024-11-26 18:02:51.003432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.157 [2024-11-26 18:02:51.003481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.157 [2024-11-26 18:02:51.016885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.157 [2024-11-26 18:02:51.017081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:34.157 [2024-11-26 18:02:51.017192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.157 [2024-11-26 18:02:51.017239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.157 [2024-11-26 18:02:51.021791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.157 [2024-11-26 18:02:51.021916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:16:34.157 [2024-11-26 18:02:51.022055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.157 [2024-11-26 18:02:51.022092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.157 [2024-11-26 18:02:51.022166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.157 [2024-11-26 18:02:51.022202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:34.157 [2024-11-26 18:02:51.022236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.157 [2024-11-26 18:02:51.022323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.157 [2024-11-26 18:02:51.022390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.157 [2024-11-26 18:02:51.022422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:34.157 [2024-11-26 18:02:51.022469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.157 [2024-11-26 18:02:51.022508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.157 [2024-11-26 18:02:51.022622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.157 [2024-11-26 18:02:51.022960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:34.157 [2024-11-26 18:02:51.023051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.157 [2024-11-26 18:02:51.023085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.157 [2024-11-26 18:02:51.023166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.157 [2024-11-26 18:02:51.023202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:34.157 [2024-11-26 18:02:51.023289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.157 [2024-11-26 18:02:51.023326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.157 [2024-11-26 18:02:51.023393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.157 [2024-11-26 18:02:51.023424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:34.157 [2024-11-26 18:02:51.023486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.157 [2024-11-26 18:02:51.023595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.157 [2024-11-26 18:02:51.023668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.157 [2024-11-26 18:02:51.023702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:34.157 [2024-11-26 18:02:51.023738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.157 [2024-11-26 18:02:51.023821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.157 [2024-11-26 18:02:51.024033] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.180 ms, result 0 00:16:34.415 18:02:51 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:34.415 18:02:51 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:34.673 [2024-11-26 18:02:51.355690] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 
initialization... 00:16:34.673 [2024-11-26 18:02:51.355969] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83725 ] 00:16:34.673 [2024-11-26 18:02:51.491884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.673 [2024-11-26 18:02:51.532628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.932 [2024-11-26 18:02:51.633652] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:34.932 [2024-11-26 18:02:51.633936] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:34.932 [2024-11-26 18:02:51.786303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.932 [2024-11-26 18:02:51.786542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:34.932 [2024-11-26 18:02:51.786574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:34.932 [2024-11-26 18:02:51.786585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.932 [2024-11-26 18:02:51.789132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.932 [2024-11-26 18:02:51.789296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:34.932 [2024-11-26 18:02:51.789328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.514 ms 00:16:34.932 [2024-11-26 18:02:51.789339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.932 [2024-11-26 18:02:51.789479] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:34.932 [2024-11-26 18:02:51.789743] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:34.932 [2024-11-26 18:02:51.789769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.932 [2024-11-26 18:02:51.789780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:34.932 [2024-11-26 18:02:51.789799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:16:34.932 [2024-11-26 18:02:51.789809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.932 [2024-11-26 18:02:51.791289] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:34.932 [2024-11-26 18:02:51.793761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.932 [2024-11-26 18:02:51.793795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:34.933 [2024-11-26 18:02:51.793809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.478 ms 00:16:34.933 [2024-11-26 18:02:51.793819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.933 [2024-11-26 18:02:51.793885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.933 [2024-11-26 18:02:51.793898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:34.933 [2024-11-26 18:02:51.793909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:34.933 [2024-11-26 18:02:51.793919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.933 [2024-11-26 18:02:51.800742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:34.933 [2024-11-26 18:02:51.800772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:34.933 [2024-11-26 18:02:51.800784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.761 ms 00:16:34.933 [2024-11-26 18:02:51.800794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.933 [2024-11-26 18:02:51.800894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.933 [2024-11-26 18:02:51.800909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:34.933 [2024-11-26 18:02:51.800923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:16:34.933 [2024-11-26 18:02:51.800940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.933 [2024-11-26 18:02:51.800969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.933 [2024-11-26 18:02:51.800979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:34.933 [2024-11-26 18:02:51.800990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:34.933 [2024-11-26 18:02:51.801003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.933 [2024-11-26 18:02:51.801032] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:34.933 [2024-11-26 18:02:51.802714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.933 [2024-11-26 18:02:51.802754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:34.933 [2024-11-26 18:02:51.802766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.695 ms 00:16:34.933 [2024-11-26 18:02:51.802784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.933 [2024-11-26 18:02:51.802838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.933 [2024-11-26 18:02:51.802850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:34.933 [2024-11-26 18:02:51.802861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:34.933 [2024-11-26 18:02:51.802871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.933 [2024-11-26 18:02:51.802900] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:34.933 [2024-11-26 18:02:51.802930] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:34.933 [2024-11-26 18:02:51.802976] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:34.933 [2024-11-26 18:02:51.803000] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:34.933 [2024-11-26 18:02:51.803074] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:34.933 [2024-11-26 18:02:51.803088] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:34.933 [2024-11-26 18:02:51.803101] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:34.933 [2024-11-26 18:02:51.803122] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:34.933 [2024-11-26 18:02:51.803134] ftl_layout.c: 
678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:34.933 [2024-11-26 18:02:51.803145] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:34.933 [2024-11-26 18:02:51.803165] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:34.933 [2024-11-26 18:02:51.803179] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:34.933 [2024-11-26 18:02:51.803189] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:34.933 [2024-11-26 18:02:51.803200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.933 [2024-11-26 18:02:51.803210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:34.933 [2024-11-26 18:02:51.803220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:16:34.933 [2024-11-26 18:02:51.803238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.933 [2024-11-26 18:02:51.803303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.933 [2024-11-26 18:02:51.803315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:34.933 [2024-11-26 18:02:51.803333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:16:34.933 [2024-11-26 18:02:51.803346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.933 [2024-11-26 18:02:51.803433] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:34.933 [2024-11-26 18:02:51.803448] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:34.933 [2024-11-26 18:02:51.803486] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:34.933 [2024-11-26 18:02:51.803497] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.933 [2024-11-26 18:02:51.803507] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:34.933 [2024-11-26 18:02:51.803517] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:34.933 [2024-11-26 18:02:51.803526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:34.933 [2024-11-26 18:02:51.803537] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:34.933 [2024-11-26 18:02:51.803547] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:34.933 [2024-11-26 18:02:51.803557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:34.933 [2024-11-26 18:02:51.803585] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:34.933 [2024-11-26 18:02:51.803595] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:34.933 [2024-11-26 18:02:51.803605] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:34.933 [2024-11-26 18:02:51.803614] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:34.933 [2024-11-26 18:02:51.803624] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:34.933 [2024-11-26 18:02:51.803638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.933 [2024-11-26 18:02:51.803648] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:34.933 [2024-11-26 18:02:51.803658] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:34.933 [2024-11-26 18:02:51.803668] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.933 [2024-11-26 18:02:51.803677] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:34.933 [2024-11-26 18:02:51.803686] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:34.933 [2024-11-26 18:02:51.803696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:34.933 [2024-11-26 18:02:51.803706] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:34.933 [2024-11-26 18:02:51.803716] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:34.933 [2024-11-26 18:02:51.803726] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:34.933 [2024-11-26 18:02:51.803735] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:34.933 [2024-11-26 18:02:51.803745] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:34.933 [2024-11-26 18:02:51.803754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:34.933 [2024-11-26 18:02:51.803764] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:34.933 [2024-11-26 18:02:51.803773] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:34.933 [2024-11-26 18:02:51.803782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:34.933 [2024-11-26 18:02:51.803796] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:34.933 [2024-11-26 18:02:51.803806] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:34.933 [2024-11-26 18:02:51.803816] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:34.933 [2024-11-26 18:02:51.803825] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:34.933 [2024-11-26 18:02:51.803835] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:34.933 [2024-11-26 18:02:51.803845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:34.933 [2024-11-26 18:02:51.803854] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:34.933 [2024-11-26 18:02:51.803863] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:34.933 [2024-11-26 18:02:51.803872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:34.933 [2024-11-26 18:02:51.803881] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:34.933 [2024-11-26 18:02:51.803891] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:34.933 [2024-11-26 18:02:51.803901] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:34.933 [2024-11-26 18:02:51.803910] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.933 [2024-11-26 18:02:51.803921] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:34.933 [2024-11-26 18:02:51.803930] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:34.933 [2024-11-26 18:02:51.803939] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:34.933 [2024-11-26 18:02:51.803952] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:34.933 [2024-11-26 18:02:51.803962] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:34.933 [2024-11-26 18:02:51.803971] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:34.933 
[2024-11-26 18:02:51.803981] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:34.933 [2024-11-26 18:02:51.803997] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:34.933 [2024-11-26 18:02:51.804018] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:34.933 [2024-11-26 18:02:51.804029] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:34.933 [2024-11-26 18:02:51.804040] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:34.933 [2024-11-26 18:02:51.804051] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:34.933 [2024-11-26 18:02:51.804062] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:34.934 [2024-11-26 18:02:51.804072] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:34.934 [2024-11-26 18:02:51.804082] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:34.934 [2024-11-26 18:02:51.804093] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:34.934 [2024-11-26 18:02:51.804103] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:34.934 [2024-11-26 18:02:51.804114] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:34.934 [2024-11-26 18:02:51.804124] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:34.934 [2024-11-26 18:02:51.804138] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:34.934 [2024-11-26 18:02:51.804149] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:34.934 [2024-11-26 18:02:51.804160] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:34.934 [2024-11-26 18:02:51.804171] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:34.934 [2024-11-26 18:02:51.804183] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:34.934 [2024-11-26 18:02:51.804193] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:34.934 [2024-11-26 18:02:51.804204] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:34.934 [2024-11-26 18:02:51.804214] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:34.934 [2024-11-26 18:02:51.804225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.934 [2024-11-26 18:02:51.804245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:34.934 [2024-11-26 18:02:51.804255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.830 ms 00:16:34.934 [2024-11-26 18:02:51.804265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.934 [2024-11-26 18:02:51.812646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.934 [2024-11-26 18:02:51.812679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:34.934 [2024-11-26 18:02:51.812693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.347 ms 00:16:34.934 [2024-11-26 18:02:51.812707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.934 [2024-11-26 18:02:51.812827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.934 [2024-11-26 18:02:51.812839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:34.934 [2024-11-26 18:02:51.812851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:16:34.934 [2024-11-26 18:02:51.812860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.934 [2024-11-26 18:02:51.834280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.934 [2024-11-26 18:02:51.834485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:34.934 [2024-11-26 18:02:51.834524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.423 ms 00:16:34.934 [2024-11-26 18:02:51.834547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.934 [2024-11-26 18:02:51.834638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.934 [2024-11-26 18:02:51.834664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:34.934 [2024-11-26 18:02:51.834678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:34.934 [2024-11-26 18:02:51.834691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.934 [2024-11-26 18:02:51.835153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.934 [2024-11-26 18:02:51.835169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:34.934 [2024-11-26 18:02:51.835182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:16:34.934 [2024-11-26 18:02:51.835199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.934 [2024-11-26 18:02:51.835350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.934 [2024-11-26 18:02:51.835366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:34.934 [2024-11-26 18:02:51.835379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:16:34.934 [2024-11-26 18:02:51.835392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.934 [2024-11-26 18:02:51.842976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.934 [2024-11-26 18:02:51.843011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:34.934 [2024-11-26 18:02:51.843024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
7.564 ms 00:16:34.934 [2024-11-26 18:02:51.843043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.934 [2024-11-26 18:02:51.845628] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:34.934 [2024-11-26 18:02:51.845771] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:34.934 [2024-11-26 18:02:51.845791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.934 [2024-11-26 18:02:51.845802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:34.934 [2024-11-26 18:02:51.845813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.648 ms 00:16:34.934 [2024-11-26 18:02:51.845824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.193 [2024-11-26 18:02:51.858327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.193 [2024-11-26 18:02:51.858379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:35.193 [2024-11-26 18:02:51.858392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.435 ms 00:16:35.193 [2024-11-26 18:02:51.858424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.193 [2024-11-26 18:02:51.860199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.193 [2024-11-26 18:02:51.860232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:35.193 [2024-11-26 18:02:51.860244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.686 ms 00:16:35.193 [2024-11-26 18:02:51.860253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.193 [2024-11-26 18:02:51.861841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.193 [2024-11-26 18:02:51.861871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:35.193 [2024-11-26 18:02:51.861883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.547 ms 00:16:35.193 [2024-11-26 18:02:51.861892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.193 [2024-11-26 18:02:51.862083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.193 [2024-11-26 18:02:51.862098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:35.193 [2024-11-26 18:02:51.862109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:16:35.193 [2024-11-26 18:02:51.862118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.193 [2024-11-26 18:02:51.884701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.193 [2024-11-26 18:02:51.884760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:35.193 [2024-11-26 18:02:51.884775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.593 ms 00:16:35.193 [2024-11-26 18:02:51.884807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.193 [2024-11-26 18:02:51.891205] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:35.193 [2024-11-26 18:02:51.907343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.193 [2024-11-26 18:02:51.907580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:35.193 [2024-11-26 
18:02:51.907606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.485 ms 00:16:35.193 [2024-11-26 18:02:51.907617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.193 [2024-11-26 18:02:51.907712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.194 [2024-11-26 18:02:51.907728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:35.194 [2024-11-26 18:02:51.907740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:35.194 [2024-11-26 18:02:51.907750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.194 [2024-11-26 18:02:51.907799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.194 [2024-11-26 18:02:51.907819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:35.194 [2024-11-26 18:02:51.907829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:35.194 [2024-11-26 18:02:51.907839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.194 [2024-11-26 18:02:51.910012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.194 [2024-11-26 18:02:51.910047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:35.194 [2024-11-26 18:02:51.910057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.157 ms 00:16:35.194 [2024-11-26 18:02:51.910067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.194 [2024-11-26 18:02:51.910106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.194 [2024-11-26 18:02:51.910116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:35.194 [2024-11-26 18:02:51.910131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:35.194 [2024-11-26 18:02:51.910157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.194 [2024-11-26 18:02:51.910194] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:35.194 [2024-11-26 18:02:51.910206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.194 [2024-11-26 18:02:51.910215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:35.194 [2024-11-26 18:02:51.910229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:35.194 [2024-11-26 18:02:51.910238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.194 [2024-11-26 18:02:51.913886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.194 [2024-11-26 18:02:51.913921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:35.194 [2024-11-26 18:02:51.913934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.630 ms 00:16:35.194 [2024-11-26 18:02:51.913944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.194 [2024-11-26 18:02:51.914015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.194 [2024-11-26 18:02:51.914027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:35.194 [2024-11-26 18:02:51.914037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:35.194 [2024-11-26 18:02:51.914053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.194 [2024-11-26 18:02:51.914971] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:35.194 [2024-11-26 18:02:51.915895] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 128.621 ms, result 0 00:16:35.194 [2024-11-26 18:02:51.916635] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:35.194 [2024-11-26 18:02:51.924957] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:36.129  [2024-11-26T18:02:53.994Z] Copying: 30/256 [MB] (30 MBps) [2024-11-26T18:02:54.931Z] Copying: 55/256 [MB] (25 MBps) [2024-11-26T18:02:56.329Z] Copying: 81/256 [MB] (25 MBps) [2024-11-26T18:02:57.265Z] Copying: 106/256 [MB] (24 MBps) [2024-11-26T18:02:58.201Z] Copying: 130/256 [MB] (24 MBps) [2024-11-26T18:02:59.138Z] Copying: 154/256 [MB] (23 MBps) [2024-11-26T18:03:00.072Z] Copying: 179/256 [MB] (24 MBps) [2024-11-26T18:03:01.009Z] Copying: 203/256 [MB] (24 MBps) [2024-11-26T18:03:01.948Z] Copying: 228/256 [MB] (24 MBps) [2024-11-26T18:03:02.208Z] Copying: 252/256 [MB] (24 MBps) [2024-11-26T18:03:02.208Z] Copying: 256/256 [MB] (average 25 MBps)[2024-11-26 18:03:02.046744] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:45.282 [2024-11-26 18:03:02.048194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.283 [2024-11-26 18:03:02.048219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:45.283 [2024-11-26 18:03:02.048240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:45.283 [2024-11-26 18:03:02.048251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.283 [2024-11-26 18:03:02.048273] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:45.283 [2024-11-26 18:03:02.048936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.283 [2024-11-26 18:03:02.048955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:45.283 [2024-11-26 18:03:02.048966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.649 ms 00:16:45.283 [2024-11-26 18:03:02.048976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.283 [2024-11-26 18:03:02.049238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.283 [2024-11-26 18:03:02.049251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:45.283 [2024-11-26 18:03:02.049262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:16:45.283 [2024-11-26 18:03:02.049273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.283 [2024-11-26 18:03:02.052232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.283 [2024-11-26 18:03:02.052257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:45.283 [2024-11-26 18:03:02.052268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.942 ms 00:16:45.283 [2024-11-26 18:03:02.052278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.283 [2024-11-26 18:03:02.057904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.283 [2024-11-26 18:03:02.058070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:45.283 
[2024-11-26 18:03:02.058093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.605 ms 00:16:45.283 [2024-11-26 18:03:02.058104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.283 [2024-11-26 18:03:02.059378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.283 [2024-11-26 18:03:02.059417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:45.283 [2024-11-26 18:03:02.059429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.184 ms 00:16:45.283 [2024-11-26 18:03:02.059440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.283 [2024-11-26 18:03:02.063627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.283 [2024-11-26 18:03:02.063663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:45.283 [2024-11-26 18:03:02.063687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.152 ms 00:16:45.283 [2024-11-26 18:03:02.063698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.283 [2024-11-26 18:03:02.063810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.283 [2024-11-26 18:03:02.063831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:45.283 [2024-11-26 18:03:02.063842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:16:45.283 [2024-11-26 18:03:02.063852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.283 [2024-11-26 18:03:02.065754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.283 [2024-11-26 18:03:02.065788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:45.283 [2024-11-26 18:03:02.065800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.882 ms 00:16:45.283 [2024-11-26 18:03:02.065809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.283 [2024-11-26 18:03:02.067178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.283 [2024-11-26 18:03:02.067214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:45.283 [2024-11-26 18:03:02.067227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.341 ms 00:16:45.283 [2024-11-26 18:03:02.067237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.283 [2024-11-26 18:03:02.068402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.283 [2024-11-26 18:03:02.068563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:45.283 [2024-11-26 18:03:02.068584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.136 ms 00:16:45.283 [2024-11-26 18:03:02.068594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.283 [2024-11-26 18:03:02.069737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.283 [2024-11-26 18:03:02.069767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:45.283 [2024-11-26 18:03:02.069778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.081 ms 00:16:45.283 [2024-11-26 18:03:02.069788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.283 [2024-11-26 18:03:02.069817] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:45.283 [2024-11-26 18:03:02.069833] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.069845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.069856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.069868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.069878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.069889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.069900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.069911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.069922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.069932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.069943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.069954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.069965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.069975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.069986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.069997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070105] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:45.283 [2024-11-26 18:03:02.070368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 
[2024-11-26 18:03:02.070400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 
state: free 00:16:45.284 [2024-11-26 18:03:02.070683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 
0 / 261120 wr_cnt: 0 state: free 00:16:45.284 [2024-11-26 18:03:02.070959] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:45.284 [2024-11-26 18:03:02.070969] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e724961a-19a7-434a-83e9-346380d427cd 00:16:45.284 [2024-11-26 18:03:02.070980] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:45.284 [2024-11-26 18:03:02.070990] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:45.284 [2024-11-26 18:03:02.071000] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:45.284 [2024-11-26 18:03:02.071010] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:45.284 [2024-11-26 18:03:02.071029] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:45.284 [2024-11-26 18:03:02.071039] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:45.284 [2024-11-26 18:03:02.071049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:45.284 [2024-11-26 18:03:02.071058] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:45.284 [2024-11-26 18:03:02.071068] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:45.284 [2024-11-26 18:03:02.071077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.284 [2024-11-26 18:03:02.071092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:45.284 [2024-11-26 18:03:02.071102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.264 ms 00:16:45.284 [2024-11-26 18:03:02.071113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.284 [2024-11-26 18:03:02.072864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.284 [2024-11-26 18:03:02.072882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:45.284 [2024-11-26 18:03:02.072893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.721 ms 00:16:45.284 [2024-11-26 18:03:02.072903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.284 [2024-11-26 18:03:02.072976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.284 [2024-11-26 18:03:02.072987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:45.284 [2024-11-26 18:03:02.072998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:45.284 [2024-11-26 18:03:02.073008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.284 [2024-11-26 18:03:02.079894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.284 [2024-11-26 18:03:02.080032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:45.284 [2024-11-26 18:03:02.080054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.284 [2024-11-26 18:03:02.080065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.284 [2024-11-26 18:03:02.080156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.284 [2024-11-26 18:03:02.080170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:45.284 [2024-11-26 18:03:02.080180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.284 [2024-11-26 18:03:02.080190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:16:45.284 [2024-11-26 18:03:02.080240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.284 [2024-11-26 18:03:02.080261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:45.284 [2024-11-26 18:03:02.080272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.284 [2024-11-26 18:03:02.080282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.284 [2024-11-26 18:03:02.080302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.284 [2024-11-26 18:03:02.080316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:45.284 [2024-11-26 18:03:02.080327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.284 [2024-11-26 18:03:02.080337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.284 [2024-11-26 18:03:02.093027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.284 [2024-11-26 18:03:02.093075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:45.284 [2024-11-26 18:03:02.093089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.284 [2024-11-26 18:03:02.093100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.284 [2024-11-26 18:03:02.097687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.284 [2024-11-26 18:03:02.097719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:45.285 [2024-11-26 18:03:02.097732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.285 [2024-11-26 18:03:02.097743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.285 [2024-11-26 18:03:02.097780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.285 [2024-11-26 18:03:02.097792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:45.285 [2024-11-26 18:03:02.097804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.285 [2024-11-26 18:03:02.097814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.285 [2024-11-26 18:03:02.097845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.285 [2024-11-26 18:03:02.097856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:45.285 [2024-11-26 18:03:02.097872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.285 [2024-11-26 18:03:02.097882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.285 [2024-11-26 18:03:02.097961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.285 [2024-11-26 18:03:02.097974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:45.285 [2024-11-26 18:03:02.097985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.285 [2024-11-26 18:03:02.097995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.285 [2024-11-26 18:03:02.098035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.285 [2024-11-26 18:03:02.098048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:45.285 [2024-11-26 18:03:02.098058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.285 [2024-11-26 
18:03:02.098073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.285 [2024-11-26 18:03:02.098113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.285 [2024-11-26 18:03:02.098125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:45.285 [2024-11-26 18:03:02.098135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.285 [2024-11-26 18:03:02.098156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.285 [2024-11-26 18:03:02.098214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:45.285 [2024-11-26 18:03:02.098226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:45.285 [2024-11-26 18:03:02.098247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:45.285 [2024-11-26 18:03:02.098257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.285 [2024-11-26 18:03:02.098404] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.266 ms, result 0 00:16:45.543 00:16:45.543 00:16:45.543 18:03:02 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:16:45.543 18:03:02 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:46.109 18:03:02 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:46.109 [2024-11-26 18:03:02.883746] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:46.109 [2024-11-26 18:03:02.883853] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83852 ] 00:16:46.368 [2024-11-26 18:03:03.034035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.368 [2024-11-26 18:03:03.075064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.368 [2024-11-26 18:03:03.175953] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:46.368 [2024-11-26 18:03:03.176041] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:46.628 [2024-11-26 18:03:03.327443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.629 [2024-11-26 18:03:03.327512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:46.629 [2024-11-26 18:03:03.327532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:46.629 [2024-11-26 18:03:03.327543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.629 [2024-11-26 18:03:03.329971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.629 [2024-11-26 18:03:03.330157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:46.629 [2024-11-26 18:03:03.330181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.411 ms 00:16:46.629 [2024-11-26 18:03:03.330192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.629 [2024-11-26 18:03:03.330420] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 
00:16:46.629 [2024-11-26 18:03:03.330679] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:46.629 [2024-11-26 18:03:03.330705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.629 [2024-11-26 18:03:03.330723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:46.629 [2024-11-26 18:03:03.330735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:16:46.629 [2024-11-26 18:03:03.330745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.629 [2024-11-26 18:03:03.332209] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:46.629 [2024-11-26 18:03:03.334787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.629 [2024-11-26 18:03:03.334823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:46.629 [2024-11-26 18:03:03.334837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.583 ms 00:16:46.629 [2024-11-26 18:03:03.334848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.629 [2024-11-26 18:03:03.334916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.629 [2024-11-26 18:03:03.334930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:46.629 [2024-11-26 18:03:03.334942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:46.629 [2024-11-26 18:03:03.334952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.629 [2024-11-26 18:03:03.341605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.629 [2024-11-26 18:03:03.341633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:46.629 [2024-11-26 18:03:03.341646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.618 ms 00:16:46.629 [2024-11-26 18:03:03.341665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.629 [2024-11-26 18:03:03.341772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.629 [2024-11-26 18:03:03.341792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:46.629 [2024-11-26 18:03:03.341807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:16:46.629 [2024-11-26 18:03:03.341817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.629 [2024-11-26 18:03:03.341846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.629 [2024-11-26 18:03:03.341857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:46.629 [2024-11-26 18:03:03.341867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:46.629 [2024-11-26 18:03:03.341888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.629 [2024-11-26 18:03:03.341916] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:46.629 [2024-11-26 18:03:03.343577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.629 [2024-11-26 18:03:03.343599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:46.629 [2024-11-26 18:03:03.343615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.673 ms 00:16:46.629 [2024-11-26 18:03:03.343625] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:46.629 [2024-11-26 18:03:03.343684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.629 [2024-11-26 18:03:03.343699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:46.629 [2024-11-26 18:03:03.343710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:46.629 [2024-11-26 18:03:03.343720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.629 [2024-11-26 18:03:03.343741] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:46.629 [2024-11-26 18:03:03.343764] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:46.629 [2024-11-26 18:03:03.343804] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:46.629 [2024-11-26 18:03:03.343831] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:46.629 [2024-11-26 18:03:03.343915] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:46.629 [2024-11-26 18:03:03.343928] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:46.629 [2024-11-26 18:03:03.343942] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:46.629 [2024-11-26 18:03:03.343956] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:46.629 [2024-11-26 18:03:03.343968] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:46.629 [2024-11-26 18:03:03.343980] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:46.629 [2024-11-26 18:03:03.343990] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:46.629 [2024-11-26 18:03:03.344000] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:46.629 [2024-11-26 18:03:03.344014] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:46.629 [2024-11-26 18:03:03.344024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.629 [2024-11-26 18:03:03.344035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:46.629 [2024-11-26 18:03:03.344052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:16:46.629 [2024-11-26 18:03:03.344062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.629 [2024-11-26 18:03:03.344129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.629 [2024-11-26 18:03:03.344147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:46.629 [2024-11-26 18:03:03.344165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:16:46.629 [2024-11-26 18:03:03.344178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.629 [2024-11-26 18:03:03.344253] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:46.629 [2024-11-26 18:03:03.344273] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:46.629 [2024-11-26 18:03:03.344284] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:46.629 [2024-11-26 
18:03:03.344295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:46.629 [2024-11-26 18:03:03.344306] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:46.629 [2024-11-26 18:03:03.344318] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:46.629 [2024-11-26 18:03:03.344328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:46.629 [2024-11-26 18:03:03.344339] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:46.629 [2024-11-26 18:03:03.344349] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:46.629 [2024-11-26 18:03:03.344358] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:46.629 [2024-11-26 18:03:03.344368] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:46.629 [2024-11-26 18:03:03.344378] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:46.629 [2024-11-26 18:03:03.344388] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:46.629 [2024-11-26 18:03:03.344397] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:46.629 [2024-11-26 18:03:03.344407] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:46.629 [2024-11-26 18:03:03.344420] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:46.629 [2024-11-26 18:03:03.344430] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:46.629 [2024-11-26 18:03:03.344440] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:46.629 [2024-11-26 18:03:03.344450] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:46.629 [2024-11-26 18:03:03.344494] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:46.629 [2024-11-26 18:03:03.344504] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:46.629 [2024-11-26 18:03:03.344514] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:46.629 [2024-11-26 18:03:03.344523] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:46.629 [2024-11-26 18:03:03.344533] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:46.629 [2024-11-26 18:03:03.344543] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:46.629 [2024-11-26 18:03:03.344556] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:46.629 [2024-11-26 18:03:03.344566] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:46.629 [2024-11-26 18:03:03.344576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:46.629 [2024-11-26 18:03:03.344585] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:46.629 [2024-11-26 18:03:03.344595] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:46.629 [2024-11-26 18:03:03.344604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:46.629 [2024-11-26 18:03:03.344620] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:46.629 [2024-11-26 18:03:03.344629] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:46.629 [2024-11-26 18:03:03.344639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:46.629 [2024-11-26 18:03:03.344648] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 
00:16:46.629 [2024-11-26 18:03:03.344657] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:46.629 [2024-11-26 18:03:03.344667] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:46.629 [2024-11-26 18:03:03.344677] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:46.629 [2024-11-26 18:03:03.344687] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:46.629 [2024-11-26 18:03:03.344697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:46.629 [2024-11-26 18:03:03.344706] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:46.629 [2024-11-26 18:03:03.344716] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:46.629 [2024-11-26 18:03:03.344726] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:46.629 [2024-11-26 18:03:03.344736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:46.629 [2024-11-26 18:03:03.344747] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:46.629 [2024-11-26 18:03:03.344756] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:46.629 [2024-11-26 18:03:03.344766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:46.629 [2024-11-26 18:03:03.344778] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:46.629 [2024-11-26 18:03:03.344787] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:46.629 [2024-11-26 18:03:03.344797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:46.629 [2024-11-26 18:03:03.344807] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:46.629 [2024-11-26 18:03:03.344820] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:46.629 [2024-11-26 18:03:03.344835] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:46.629 [2024-11-26 18:03:03.344845] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:46.629 [2024-11-26 18:03:03.344856] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:46.629 [2024-11-26 18:03:03.344867] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:46.629 [2024-11-26 18:03:03.344877] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:46.629 [2024-11-26 18:03:03.344888] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:46.629 [2024-11-26 18:03:03.344898] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:46.629 [2024-11-26 18:03:03.344909] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:46.629 [2024-11-26 18:03:03.344919] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 
blk_offs:0x6b60 blk_sz:0x40 00:16:46.629 [2024-11-26 18:03:03.344930] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:46.629 [2024-11-26 18:03:03.344940] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:46.629 [2024-11-26 18:03:03.344954] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:46.629 [2024-11-26 18:03:03.344965] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:46.629 [2024-11-26 18:03:03.344975] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:46.629 [2024-11-26 18:03:03.344987] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:46.630 [2024-11-26 18:03:03.344998] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:46.630 [2024-11-26 18:03:03.345010] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:46.630 [2024-11-26 18:03:03.345021] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:46.630 [2024-11-26 18:03:03.345031] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:46.630 [2024-11-26 18:03:03.345042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.630 [2024-11-26 18:03:03.345062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:46.630 [2024-11-26 18:03:03.345073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:16:46.630 [2024-11-26 18:03:03.345084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.630 [2024-11-26 18:03:03.353465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.630 [2024-11-26 18:03:03.353493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:46.630 [2024-11-26 18:03:03.353506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.335 ms 00:16:46.630 [2024-11-26 18:03:03.353521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.630 [2024-11-26 18:03:03.353631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.630 [2024-11-26 18:03:03.353644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:46.630 [2024-11-26 18:03:03.353655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:16:46.630 [2024-11-26 18:03:03.353666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.630 [2024-11-26 18:03:03.373940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.630 [2024-11-26 18:03:03.373987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:46.630 [2024-11-26 18:03:03.374019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.275 ms 00:16:46.630 [2024-11-26 18:03:03.374045] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:46.630 [2024-11-26 18:03:03.374132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.630 [2024-11-26 18:03:03.374168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:46.630 [2024-11-26 18:03:03.374184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:46.630 [2024-11-26 18:03:03.374197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.630 [2024-11-26 18:03:03.374675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.630 [2024-11-26 18:03:03.374692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:46.630 [2024-11-26 18:03:03.374707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:16:46.630 [2024-11-26 18:03:03.374720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.630 [2024-11-26 18:03:03.374863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.630 [2024-11-26 18:03:03.374886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:46.630 [2024-11-26 18:03:03.374900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:16:46.630 [2024-11-26 18:03:03.374912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.630 [2024-11-26 18:03:03.382056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.630 [2024-11-26 18:03:03.382092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:46.630 [2024-11-26 18:03:03.382106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.122 ms 00:16:46.630 [2024-11-26 18:03:03.382116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.630 [2024-11-26 18:03:03.384726] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:46.630 [2024-11-26 18:03:03.384881] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:46.630 [2024-11-26 18:03:03.384912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.630 [2024-11-26 18:03:03.384924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:46.630 [2024-11-26 18:03:03.384935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.694 ms 00:16:46.630 [2024-11-26 18:03:03.384945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.630 [2024-11-26 18:03:03.397963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.630 [2024-11-26 18:03:03.398017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:46.630 [2024-11-26 18:03:03.398032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.928 ms 00:16:46.630 [2024-11-26 18:03:03.398042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.630 [2024-11-26 18:03:03.399841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.630 [2024-11-26 18:03:03.399975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:46.630 [2024-11-26 18:03:03.399994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.719 ms 00:16:46.630 [2024-11-26 18:03:03.400004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.630 [2024-11-26 18:03:03.401413] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.630 [2024-11-26 18:03:03.401446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:46.630 [2024-11-26 18:03:03.401471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.368 ms 00:16:46.630 [2024-11-26 18:03:03.401481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.630 [2024-11-26 18:03:03.401688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.630 [2024-11-26 18:03:03.401705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:46.630 [2024-11-26 18:03:03.401716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:16:46.630 [2024-11-26 18:03:03.401726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.630 [2024-11-26 18:03:03.424299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.630 [2024-11-26 18:03:03.424358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:46.630 [2024-11-26 18:03:03.424387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.586 ms 00:16:46.630 [2024-11-26 18:03:03.424398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.630 [2024-11-26 18:03:03.430688] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:46.630 [2024-11-26 18:03:03.447266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.630 [2024-11-26 18:03:03.447316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:46.630 [2024-11-26 18:03:03.447333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.786 ms 00:16:46.630 [2024-11-26 18:03:03.447358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.630 [2024-11-26 18:03:03.447494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.630 [2024-11-26 18:03:03.447513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:46.630 [2024-11-26 18:03:03.447525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:46.630 [2024-11-26 18:03:03.447536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.630 [2024-11-26 18:03:03.447592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.630 [2024-11-26 18:03:03.447603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:46.630 [2024-11-26 18:03:03.447614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:46.630 [2024-11-26 18:03:03.447632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.630 [2024-11-26 18:03:03.449673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.630 [2024-11-26 18:03:03.449813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:46.630 [2024-11-26 18:03:03.449839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.024 ms 00:16:46.630 [2024-11-26 18:03:03.449850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.630 [2024-11-26 18:03:03.449913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.630 [2024-11-26 18:03:03.449925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:46.630 [2024-11-26 18:03:03.449942] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:46.630 [2024-11-26 18:03:03.449956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.630 [2024-11-26 18:03:03.449992] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:46.630 [2024-11-26 18:03:03.450004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.630 [2024-11-26 18:03:03.450013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:46.630 [2024-11-26 18:03:03.450023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:46.630 [2024-11-26 18:03:03.450044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.630 [2024-11-26 18:03:03.453796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.630 [2024-11-26 18:03:03.453831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:46.630 [2024-11-26 18:03:03.453844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.733 ms 00:16:46.630 [2024-11-26 18:03:03.453864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.630 [2024-11-26 18:03:03.453936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.630 [2024-11-26 18:03:03.453947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:46.630 [2024-11-26 18:03:03.453959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:46.630 [2024-11-26 18:03:03.453968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.630 [2024-11-26 18:03:03.454933] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:46.630 [2024-11-26 18:03:03.455871] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 127.430 ms, result 0 00:16:46.630 [2024-11-26 18:03:03.456540] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:46.630 [2024-11-26 18:03:03.464962] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:46.890  [2024-11-26T18:03:03.816Z] Copying: 4096/4096 [kB] (average 24 MBps)[2024-11-26 18:03:03.626951] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:46.890 [2024-11-26 18:03:03.628243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.890 [2024-11-26 18:03:03.628401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:46.890 [2024-11-26 18:03:03.628432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:46.890 [2024-11-26 18:03:03.628445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.890 [2024-11-26 18:03:03.628487] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:46.890 [2024-11-26 18:03:03.629141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.890 [2024-11-26 18:03:03.629154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:46.890 [2024-11-26 18:03:03.629165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:16:46.890 [2024-11-26 18:03:03.629176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:46.890 [2024-11-26 18:03:03.630984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.890 [2024-11-26 18:03:03.631024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:46.890 [2024-11-26 18:03:03.631036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.768 ms 00:16:46.890 [2024-11-26 18:03:03.631046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.890 [2024-11-26 18:03:03.634348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.890 [2024-11-26 18:03:03.634477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:46.890 [2024-11-26 18:03:03.634553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.281 ms 00:16:46.890 [2024-11-26 18:03:03.634589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.890 [2024-11-26 18:03:03.640340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.890 [2024-11-26 18:03:03.640475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:46.890 [2024-11-26 18:03:03.640554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.687 ms 00:16:46.890 [2024-11-26 18:03:03.640589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.890 [2024-11-26 18:03:03.642097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.890 [2024-11-26 18:03:03.642232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:46.890 [2024-11-26 18:03:03.642251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.414 ms 00:16:46.890 [2024-11-26 18:03:03.642261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.890 [2024-11-26 18:03:03.646071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.890 [2024-11-26 18:03:03.646202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:46.890 [2024-11-26 18:03:03.646223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.780 ms 00:16:46.890 [2024-11-26 18:03:03.646244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.890 [2024-11-26 18:03:03.646357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.890 [2024-11-26 18:03:03.646369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:46.890 [2024-11-26 18:03:03.646380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:16:46.890 [2024-11-26 18:03:03.646390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.891 [2024-11-26 18:03:03.648370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.891 [2024-11-26 18:03:03.648406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:46.891 [2024-11-26 18:03:03.648418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.965 ms 00:16:46.891 [2024-11-26 18:03:03.648427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.891 [2024-11-26 18:03:03.649961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.891 [2024-11-26 18:03:03.649990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:46.891 [2024-11-26 18:03:03.650001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.497 ms 00:16:46.891 [2024-11-26 18:03:03.650010] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.891 [2024-11-26 18:03:03.651124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.891 [2024-11-26 18:03:03.651154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:46.891 [2024-11-26 18:03:03.651165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.087 ms 00:16:46.891 [2024-11-26 18:03:03.651174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.891 [2024-11-26 18:03:03.652256] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.891 [2024-11-26 18:03:03.652291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:46.891 [2024-11-26 18:03:03.652302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:16:46.891 [2024-11-26 18:03:03.652311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.891 [2024-11-26 18:03:03.652338] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:46.891 [2024-11-26 18:03:03.652359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652555] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 
[2024-11-26 18:03:03.652819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:46.891 [2024-11-26 18:03:03.652871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.652881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.652891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.652901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.652911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.652922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.652932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.652942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.652952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.652962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.652972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.652981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.652992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 
state: free 00:16:46.892 [2024-11-26 18:03:03.653075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 
0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:46.892 [2024-11-26 18:03:03.653420] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:46.892 [2024-11-26 18:03:03.653430] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e724961a-19a7-434a-83e9-346380d427cd 00:16:46.892 [2024-11-26 18:03:03.653441] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:46.892 [2024-11-26 18:03:03.653450] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:46.892 [2024-11-26 18:03:03.653471] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:46.892 [2024-11-26 18:03:03.653481] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:46.892 [2024-11-26 18:03:03.653490] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:46.892 [2024-11-26 18:03:03.653500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:46.892 [2024-11-26 18:03:03.653510] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:46.892 [2024-11-26 18:03:03.653519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:46.892 [2024-11-26 18:03:03.653528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:46.892 [2024-11-26 18:03:03.653537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.893 [2024-11-26 18:03:03.653547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:46.893 [2024-11-26 18:03:03.653562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.202 ms 00:16:46.893 [2024-11-26 18:03:03.653572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.893 [2024-11-26 18:03:03.655300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.893 [2024-11-26 18:03:03.655323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:46.893 [2024-11-26 18:03:03.655335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.701 ms 00:16:46.893 [2024-11-26 18:03:03.655345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.893 [2024-11-26 18:03:03.655418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.893 [2024-11-26 18:03:03.655434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize 
P2L checkpointing 00:16:46.893 [2024-11-26 18:03:03.655445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:46.893 [2024-11-26 18:03:03.655465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.893 [2024-11-26 18:03:03.662210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.893 [2024-11-26 18:03:03.662236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:46.893 [2024-11-26 18:03:03.662248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.893 [2024-11-26 18:03:03.662260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.893 [2024-11-26 18:03:03.662340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.893 [2024-11-26 18:03:03.662356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:46.893 [2024-11-26 18:03:03.662367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.893 [2024-11-26 18:03:03.662378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.893 [2024-11-26 18:03:03.662425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.893 [2024-11-26 18:03:03.662437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:46.893 [2024-11-26 18:03:03.662448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.893 [2024-11-26 18:03:03.662475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.893 [2024-11-26 18:03:03.662494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.893 [2024-11-26 18:03:03.662505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:46.893 [2024-11-26 18:03:03.662519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.893 [2024-11-26 18:03:03.662529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.893 [2024-11-26 18:03:03.675141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.893 [2024-11-26 18:03:03.675191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:46.893 [2024-11-26 18:03:03.675215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.893 [2024-11-26 18:03:03.675225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.893 [2024-11-26 18:03:03.679744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.893 [2024-11-26 18:03:03.679782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:46.893 [2024-11-26 18:03:03.679794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.893 [2024-11-26 18:03:03.679804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.893 [2024-11-26 18:03:03.679840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.893 [2024-11-26 18:03:03.679852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:46.893 [2024-11-26 18:03:03.679863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.893 [2024-11-26 18:03:03.679872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.893 [2024-11-26 18:03:03.679902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.893 [2024-11-26 
18:03:03.679912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:46.893 [2024-11-26 18:03:03.679922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.893 [2024-11-26 18:03:03.679936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.893 [2024-11-26 18:03:03.680019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.893 [2024-11-26 18:03:03.680032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:46.893 [2024-11-26 18:03:03.680043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.893 [2024-11-26 18:03:03.680053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.893 [2024-11-26 18:03:03.680103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.893 [2024-11-26 18:03:03.680116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:46.893 [2024-11-26 18:03:03.680126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.893 [2024-11-26 18:03:03.680135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.893 [2024-11-26 18:03:03.680183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.893 [2024-11-26 18:03:03.680195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:46.893 [2024-11-26 18:03:03.680212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.893 [2024-11-26 18:03:03.680221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.893 [2024-11-26 18:03:03.680281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.893 [2024-11-26 18:03:03.680293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:46.893 [2024-11-26 18:03:03.680304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.893 [2024-11-26 18:03:03.680317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.893 [2024-11-26 18:03:03.680447] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.281 ms, result 0 00:16:47.151 00:16:47.151 00:16:47.151 18:03:03 -- ftl/trim.sh@93 -- # svcpid=83866 00:16:47.151 18:03:03 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:47.151 18:03:03 -- ftl/trim.sh@94 -- # waitforlisten 83866 00:16:47.151 18:03:03 -- common/autotest_common.sh@829 -- # '[' -z 83866 ']' 00:16:47.151 18:03:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.151 18:03:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:47.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.151 18:03:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.151 18:03:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:47.151 18:03:03 -- common/autotest_common.sh@10 -- # set +x 00:16:47.151 [2024-11-26 18:03:04.030763] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:47.151 [2024-11-26 18:03:04.030884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83866 ] 00:16:47.408 [2024-11-26 18:03:04.182665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.408 [2024-11-26 18:03:04.225171] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:47.408 [2024-11-26 18:03:04.225365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.973 18:03:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:47.973 18:03:04 -- common/autotest_common.sh@862 -- # return 0 00:16:47.973 18:03:04 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:48.231 [2024-11-26 18:03:05.052574] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:48.231 [2024-11-26 18:03:05.052635] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:48.531 [2024-11-26 18:03:05.220540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.531 [2024-11-26 18:03:05.220593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:48.531 [2024-11-26 18:03:05.220611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:48.531 [2024-11-26 18:03:05.220622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.531 [2024-11-26 18:03:05.222967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.531 [2024-11-26 18:03:05.223008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:48.531 [2024-11-26 18:03:05.223023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.325 ms 00:16:48.531 [2024-11-26 18:03:05.223033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.531 [2024-11-26 18:03:05.223114] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:48.531 [2024-11-26 18:03:05.223338] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:48.531 [2024-11-26 18:03:05.223359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.531 [2024-11-26 18:03:05.223370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:48.531 [2024-11-26 18:03:05.223383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:16:48.531 [2024-11-26 18:03:05.223393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.531 [2024-11-26 18:03:05.225006] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:48.531 [2024-11-26 18:03:05.227474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.531 [2024-11-26 18:03:05.227514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:48.531 [2024-11-26 18:03:05.227526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.477 ms 00:16:48.531 [2024-11-26 18:03:05.227538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.531 [2024-11-26 18:03:05.227599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.531 [2024-11-26 18:03:05.227625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:16:48.531 [2024-11-26 18:03:05.227643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:16:48.531 [2024-11-26 18:03:05.227659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.531 [2024-11-26 18:03:05.234255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.531 [2024-11-26 18:03:05.234290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:48.531 [2024-11-26 18:03:05.234303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.561 ms 00:16:48.531 [2024-11-26 18:03:05.234315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.531 [2024-11-26 18:03:05.234429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.531 [2024-11-26 18:03:05.234449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:48.531 [2024-11-26 18:03:05.234498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:48.531 [2024-11-26 18:03:05.234513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.531 [2024-11-26 18:03:05.234543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.531 [2024-11-26 18:03:05.234555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:48.531 [2024-11-26 18:03:05.234566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:48.531 [2024-11-26 18:03:05.234578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.531 [2024-11-26 18:03:05.234608] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:48.531 [2024-11-26 18:03:05.236221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.531 [2024-11-26 18:03:05.236259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:48.531 [2024-11-26 18:03:05.236273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.618 ms 00:16:48.531 [2024-11-26 18:03:05.236283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.531 [2024-11-26 18:03:05.236333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.531 [2024-11-26 18:03:05.236344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:48.531 [2024-11-26 18:03:05.236356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:48.531 [2024-11-26 18:03:05.236366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.531 [2024-11-26 18:03:05.236398] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:48.531 [2024-11-26 18:03:05.236421] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:48.531 [2024-11-26 18:03:05.236484] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:48.531 [2024-11-26 18:03:05.236509] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:48.531 [2024-11-26 18:03:05.236580] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:48.531 [2024-11-26 18:03:05.236593] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:16:48.531 [2024-11-26 18:03:05.236608] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:48.531 [2024-11-26 18:03:05.236621] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:48.531 [2024-11-26 18:03:05.236637] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:48.531 [2024-11-26 18:03:05.236655] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:48.531 [2024-11-26 18:03:05.236677] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:48.531 [2024-11-26 18:03:05.236694] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:48.531 [2024-11-26 18:03:05.236713] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:48.531 [2024-11-26 18:03:05.236723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.531 [2024-11-26 18:03:05.236735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:48.531 [2024-11-26 18:03:05.236746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:16:48.531 [2024-11-26 18:03:05.236757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.531 [2024-11-26 18:03:05.236818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.531 [2024-11-26 18:03:05.236831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:48.531 [2024-11-26 18:03:05.236840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:48.531 [2024-11-26 18:03:05.236852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.531 [2024-11-26 18:03:05.236925] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:48.531 [2024-11-26 18:03:05.236939] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:48.531 [2024-11-26 18:03:05.236956] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:48.531 [2024-11-26 18:03:05.236972] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.531 [2024-11-26 18:03:05.236982] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:48.531 [2024-11-26 18:03:05.236994] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:48.531 [2024-11-26 18:03:05.237003] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:48.531 [2024-11-26 18:03:05.237016] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:48.531 [2024-11-26 18:03:05.237025] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:48.531 [2024-11-26 18:03:05.237037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:48.531 [2024-11-26 18:03:05.237046] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:48.531 [2024-11-26 18:03:05.237057] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:48.531 [2024-11-26 18:03:05.237066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:48.531 [2024-11-26 18:03:05.237079] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:48.531 [2024-11-26 18:03:05.237088] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:48.531 [2024-11-26 18:03:05.237099] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.531 [2024-11-26 18:03:05.237108] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:48.531 [2024-11-26 18:03:05.237119] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:48.531 [2024-11-26 18:03:05.237128] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.531 [2024-11-26 18:03:05.237144] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:48.531 [2024-11-26 18:03:05.237153] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:48.531 [2024-11-26 18:03:05.237164] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:48.531 [2024-11-26 18:03:05.237173] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:48.531 [2024-11-26 18:03:05.237185] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:48.531 [2024-11-26 18:03:05.237194] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:48.531 [2024-11-26 18:03:05.237205] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:48.531 [2024-11-26 18:03:05.237214] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:48.531 [2024-11-26 18:03:05.237225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:48.531 [2024-11-26 18:03:05.237234] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:48.531 [2024-11-26 18:03:05.237245] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:48.531 [2024-11-26 18:03:05.237254] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:48.531 [2024-11-26 18:03:05.237265] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:48.531 [2024-11-26 18:03:05.237274] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:48.532 [2024-11-26 18:03:05.237285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:48.532 [2024-11-26 18:03:05.237294] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:48.532 [2024-11-26 18:03:05.237308] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:48.532 [2024-11-26 18:03:05.237317] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:48.532 [2024-11-26 18:03:05.237328] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:48.532 [2024-11-26 18:03:05.237337] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:48.532 [2024-11-26 18:03:05.237348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:48.532 [2024-11-26 18:03:05.237356] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:48.532 [2024-11-26 18:03:05.237368] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:48.532 [2024-11-26 18:03:05.237378] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:48.532 [2024-11-26 18:03:05.237391] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.532 [2024-11-26 18:03:05.237400] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:48.532 [2024-11-26 18:03:05.237415] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:48.532 [2024-11-26 18:03:05.237425] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:16:48.532 [2024-11-26 18:03:05.237437] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:48.532 [2024-11-26 18:03:05.237446] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:48.532 [2024-11-26 18:03:05.237467] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:48.532 [2024-11-26 18:03:05.237478] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:48.532 [2024-11-26 18:03:05.237501] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:48.532 [2024-11-26 18:03:05.237512] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:48.532 [2024-11-26 18:03:05.237525] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:48.532 [2024-11-26 18:03:05.237536] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:48.532 [2024-11-26 18:03:05.237549] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:48.532 [2024-11-26 18:03:05.237566] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:48.532 [2024-11-26 18:03:05.237579] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:48.532 [2024-11-26 18:03:05.237589] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:48.532 [2024-11-26 18:03:05.237602] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:48.532 [2024-11-26 18:03:05.237612] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:48.532 [2024-11-26 18:03:05.237624] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:48.532 [2024-11-26 18:03:05.237634] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:48.532 [2024-11-26 18:03:05.237648] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:48.532 [2024-11-26 18:03:05.237658] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:48.532 [2024-11-26 18:03:05.237671] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:48.532 [2024-11-26 18:03:05.237681] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:48.532 [2024-11-26 18:03:05.237698] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:48.532 [2024-11-26 18:03:05.237708] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:48.532 [2024-11-26 18:03:05.237721] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:48.532 [2024-11-26 18:03:05.237731] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:48.532 [2024-11-26 18:03:05.237743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.532 [2024-11-26 18:03:05.237753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:48.532 [2024-11-26 18:03:05.237766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.853 ms 00:16:48.532 [2024-11-26 18:03:05.237782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.532 [2024-11-26 18:03:05.246120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.532 [2024-11-26 18:03:05.246160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:48.532 [2024-11-26 18:03:05.246178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.295 ms 00:16:48.532 [2024-11-26 18:03:05.246188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.532 [2024-11-26 18:03:05.246304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.532 [2024-11-26 18:03:05.246317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:48.532 [2024-11-26 18:03:05.246337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:48.532 [2024-11-26 18:03:05.246347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.532 [2024-11-26 18:03:05.258584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.532 [2024-11-26 18:03:05.258747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:48.532 [2024-11-26 18:03:05.258779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.221 ms 00:16:48.532 [2024-11-26 18:03:05.258790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.532 [2024-11-26 18:03:05.258858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.532 [2024-11-26 18:03:05.258870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:48.532 [2024-11-26 18:03:05.258883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:48.532 [2024-11-26 18:03:05.258894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.532 [2024-11-26 18:03:05.259332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.532 [2024-11-26 18:03:05.259344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:48.532 [2024-11-26 18:03:05.259357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:16:48.532 [2024-11-26 18:03:05.259369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.532 [2024-11-26 18:03:05.259504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.532 [2024-11-26 18:03:05.259519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:48.532 [2024-11-26 18:03:05.259531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:16:48.532 [2024-11-26 18:03:05.259541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:48.532 [2024-11-26 18:03:05.266704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.532 [2024-11-26 18:03:05.266740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:48.532 [2024-11-26 18:03:05.266755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.146 ms 00:16:48.532 [2024-11-26 18:03:05.266768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.532 [2024-11-26 18:03:05.269322] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:48.532 [2024-11-26 18:03:05.269356] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:48.532 [2024-11-26 18:03:05.269372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.532 [2024-11-26 18:03:05.269383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:48.532 [2024-11-26 18:03:05.269396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.505 ms 00:16:48.532 [2024-11-26 18:03:05.269406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.532 [2024-11-26 18:03:05.282256] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.532 [2024-11-26 18:03:05.282296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:48.532 [2024-11-26 18:03:05.282312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.791 ms 00:16:48.532 [2024-11-26 18:03:05.282326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.532 [2024-11-26 18:03:05.284135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.532 [2024-11-26 18:03:05.284167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:48.532 [2024-11-26 18:03:05.284184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.731 ms 00:16:48.532 [2024-11-26 18:03:05.284193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.532 [2024-11-26 18:03:05.285753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.532 [2024-11-26 18:03:05.285785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:48.532 [2024-11-26 18:03:05.285799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.516 ms 00:16:48.532 [2024-11-26 18:03:05.285808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.532 [2024-11-26 18:03:05.285990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.532 [2024-11-26 18:03:05.286004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:48.532 [2024-11-26 18:03:05.286018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:16:48.532 [2024-11-26 18:03:05.286027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.532 [2024-11-26 18:03:05.308628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.532 [2024-11-26 18:03:05.308689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:48.532 [2024-11-26 18:03:05.308707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.609 ms 00:16:48.532 [2024-11-26 18:03:05.308721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.532 [2024-11-26 18:03:05.314938] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:48.532 [2024-11-26 18:03:05.330739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.532 [2024-11-26 18:03:05.330808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:48.532 [2024-11-26 18:03:05.330823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.965 ms 00:16:48.533 [2024-11-26 18:03:05.330837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.533 [2024-11-26 18:03:05.330929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.533 [2024-11-26 18:03:05.330944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:48.533 [2024-11-26 18:03:05.330956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:48.533 [2024-11-26 18:03:05.330968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.533 [2024-11-26 18:03:05.331019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.533 [2024-11-26 18:03:05.331032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:48.533 [2024-11-26 18:03:05.331050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:48.533 [2024-11-26 18:03:05.331063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.533 [2024-11-26 18:03:05.333240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.533 [2024-11-26 18:03:05.333279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:48.533 [2024-11-26 18:03:05.333290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.159 ms 00:16:48.533 [2024-11-26 18:03:05.333311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.533 [2024-11-26 18:03:05.333343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.533 [2024-11-26 18:03:05.333360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:48.533 [2024-11-26 18:03:05.333370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:48.533 [2024-11-26 18:03:05.333382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.533 [2024-11-26 18:03:05.333420] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:48.533 [2024-11-26 18:03:05.333434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.533 [2024-11-26 18:03:05.333444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:48.533 [2024-11-26 18:03:05.333473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:48.533 [2024-11-26 18:03:05.333483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.533 [2024-11-26 18:03:05.337291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.533 [2024-11-26 18:03:05.337326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:48.533 [2024-11-26 18:03:05.337341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.784 ms 00:16:48.533 [2024-11-26 18:03:05.337351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.533 [2024-11-26 18:03:05.337421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.533 [2024-11-26 18:03:05.337434] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:48.533 [2024-11-26 18:03:05.337447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:16:48.533 [2024-11-26 18:03:05.337475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.533 [2024-11-26 18:03:05.338416] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:48.533 [2024-11-26 18:03:05.339354] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 117.776 ms, result 0 00:16:48.533 [2024-11-26 18:03:05.340372] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:48.533 Some configs were skipped because the RPC state that can call them passed over. 00:16:48.533 18:03:05 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:48.799 [2024-11-26 18:03:05.564265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.799 [2024-11-26 18:03:05.564485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:48.799 [2024-11-26 18:03:05.564573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.652 ms 00:16:48.799 [2024-11-26 18:03:05.564615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.799 [2024-11-26 18:03:05.564681] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 5.072 ms, result 0 00:16:48.799 true 00:16:48.799 18:03:05 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:49.058 [2024-11-26 18:03:05.767839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.058 [2024-11-26 18:03:05.768043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:49.058 [2024-11-26 18:03:05.768127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.199 ms 00:16:49.058 [2024-11-26 18:03:05.768163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.058 [2024-11-26 18:03:05.768236] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 4.596 ms, result 0 00:16:49.058 true 00:16:49.058 18:03:05 -- ftl/trim.sh@102 -- # killprocess 83866 00:16:49.058 18:03:05 -- common/autotest_common.sh@936 -- # '[' -z 83866 ']' 00:16:49.058 18:03:05 -- common/autotest_common.sh@940 -- # kill -0 83866 00:16:49.058 18:03:05 -- common/autotest_common.sh@941 -- # uname 00:16:49.058 18:03:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:49.058 18:03:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83866 00:16:49.058 killing process with pid 83866 00:16:49.058 18:03:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:49.058 18:03:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:49.058 18:03:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83866' 00:16:49.058 18:03:05 -- common/autotest_common.sh@955 -- # kill 83866 00:16:49.058 18:03:05 -- common/autotest_common.sh@960 -- # wait 83866 00:16:49.058 [2024-11-26 18:03:05.979900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.058 [2024-11-26 18:03:05.979968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:49.058 
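For reference, the two bdev_ftl_unmap calls above exercise both ends of the device's logical space: the first trims 1024 blocks starting at LBA 0, the second trims the last 1024 blocks of the L2P range (23592960 - 1024 = 23591936, matching the L2P entry count reported in the layout dump). A minimal sketch of issuing the same RPCs by hand against a running spdk_tgt, using the exact arguments and repo path from this run:

  # hedged sketch: assumes spdk_tgt is running and ftl0 was set up via load_config, as trim.sh does above
  cd /home/vagrant/spdk_repo/spdk
  ./scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  ./scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

Each call prints "true" on success, as logged above. The killprocess 83866 that follows sends the default SIGTERM to spdk_tgt (kill, then wait), which is what triggers the graceful FTL shutdown sequence continuing below.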
[2024-11-26 18:03:05.979985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:49.058 [2024-11-26 18:03:05.980016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.058 [2024-11-26 18:03:05.980040] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:49.058 [2024-11-26 18:03:05.980729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.058 [2024-11-26 18:03:05.980747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:49.058 [2024-11-26 18:03:05.980760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:16:49.058 [2024-11-26 18:03:05.980770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.058 [2024-11-26 18:03:05.981023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.058 [2024-11-26 18:03:05.981039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:49.058 [2024-11-26 18:03:05.981060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:16:49.058 [2024-11-26 18:03:05.981073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.319 [2024-11-26 18:03:05.984490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.319 [2024-11-26 18:03:05.984526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:49.319 [2024-11-26 18:03:05.984542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.395 ms 00:16:49.319 [2024-11-26 18:03:05.984551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.319 [2024-11-26 18:03:05.990063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.319 [2024-11-26 18:03:05.990098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:49.319 [2024-11-26 18:03:05.990117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.476 ms 00:16:49.319 [2024-11-26 18:03:05.990127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.319 [2024-11-26 18:03:05.991612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.319 [2024-11-26 18:03:05.991647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:49.319 [2024-11-26 18:03:05.991661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.391 ms 00:16:49.319 [2024-11-26 18:03:05.991670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.319 [2024-11-26 18:03:05.995590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.319 [2024-11-26 18:03:05.995625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:49.319 [2024-11-26 18:03:05.995640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.888 ms 00:16:49.319 [2024-11-26 18:03:05.995650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.319 [2024-11-26 18:03:05.995763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.320 [2024-11-26 18:03:05.995775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:49.320 [2024-11-26 18:03:05.995787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:16:49.320 [2024-11-26 18:03:05.995797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.320 [2024-11-26 
18:03:05.997717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.320 [2024-11-26 18:03:05.997862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:49.320 [2024-11-26 18:03:05.997890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.897 ms 00:16:49.320 [2024-11-26 18:03:05.997900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.320 [2024-11-26 18:03:05.999469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.320 [2024-11-26 18:03:05.999502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:49.320 [2024-11-26 18:03:05.999518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.499 ms 00:16:49.320 [2024-11-26 18:03:05.999527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.320 [2024-11-26 18:03:06.000669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.320 [2024-11-26 18:03:06.000702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:49.320 [2024-11-26 18:03:06.000716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.107 ms 00:16:49.320 [2024-11-26 18:03:06.000725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.320 [2024-11-26 18:03:06.001935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.320 [2024-11-26 18:03:06.001967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:49.320 [2024-11-26 18:03:06.001981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.149 ms 00:16:49.320 [2024-11-26 18:03:06.001991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.320 [2024-11-26 18:03:06.002023] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:49.320 [2024-11-26 18:03:06.002048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002506] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 
18:03:06.002805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:49.320 [2024-11-26 18:03:06.002902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.002912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.002925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.002935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.002948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.002958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.002970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.002982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.002994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.003004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.003016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.003027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.003042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.003052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.003065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.003075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.003088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:16:49.321 [2024-11-26 18:03:06.003098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.003111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.003121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.003134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.003144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.003159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.003169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.003182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.003193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.003205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.003215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.003231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.003241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.003253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:49.321 [2024-11-26 18:03:06.003271] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:49.321 [2024-11-26 18:03:06.003283] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e724961a-19a7-434a-83e9-346380d427cd 00:16:49.321 [2024-11-26 18:03:06.003294] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:49.321 [2024-11-26 18:03:06.003305] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:49.321 [2024-11-26 18:03:06.003315] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:49.321 [2024-11-26 18:03:06.003327] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:49.321 [2024-11-26 18:03:06.003336] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:49.321 [2024-11-26 18:03:06.003348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:49.321 [2024-11-26 18:03:06.003358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:49.321 [2024-11-26 18:03:06.003369] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:49.321 [2024-11-26 18:03:06.003377] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:49.321 [2024-11-26 18:03:06.003389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.321 [2024-11-26 18:03:06.003402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:49.321 [2024-11-26 18:03:06.003417] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.369 ms 00:16:49.321 [2024-11-26 18:03:06.003426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.321 [2024-11-26 18:03:06.005163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.321 [2024-11-26 18:03:06.005186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:49.321 [2024-11-26 18:03:06.005199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.672 ms 00:16:49.321 [2024-11-26 18:03:06.005209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.321 [2024-11-26 18:03:06.005284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.321 [2024-11-26 18:03:06.005295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:49.321 [2024-11-26 18:03:06.005308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:49.321 [2024-11-26 18:03:06.005317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.321 [2024-11-26 18:03:06.012279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.321 [2024-11-26 18:03:06.012414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:49.321 [2024-11-26 18:03:06.012440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.321 [2024-11-26 18:03:06.012497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.321 [2024-11-26 18:03:06.012584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.321 [2024-11-26 18:03:06.012596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:49.321 [2024-11-26 18:03:06.012612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.321 [2024-11-26 18:03:06.012621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.321 [2024-11-26 18:03:06.012675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.321 [2024-11-26 18:03:06.012687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:49.321 [2024-11-26 18:03:06.012699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.321 [2024-11-26 18:03:06.012709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.321 [2024-11-26 18:03:06.012732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.321 [2024-11-26 18:03:06.012745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:49.321 [2024-11-26 18:03:06.012757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.321 [2024-11-26 18:03:06.012766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.321 [2024-11-26 18:03:06.025380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.321 [2024-11-26 18:03:06.025421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:49.321 [2024-11-26 18:03:06.025436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.321 [2024-11-26 18:03:06.025446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.321 [2024-11-26 18:03:06.029976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.321 [2024-11-26 18:03:06.030007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:16:49.321 [2024-11-26 18:03:06.030034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.321 [2024-11-26 18:03:06.030044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.321 [2024-11-26 18:03:06.030082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.321 [2024-11-26 18:03:06.030093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:49.321 [2024-11-26 18:03:06.030106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.321 [2024-11-26 18:03:06.030123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.321 [2024-11-26 18:03:06.030165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.321 [2024-11-26 18:03:06.030175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:49.321 [2024-11-26 18:03:06.030191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.321 [2024-11-26 18:03:06.030200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.321 [2024-11-26 18:03:06.030280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.321 [2024-11-26 18:03:06.030293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:49.321 [2024-11-26 18:03:06.030306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.321 [2024-11-26 18:03:06.030315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.321 [2024-11-26 18:03:06.030356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.321 [2024-11-26 18:03:06.030367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:49.321 [2024-11-26 18:03:06.030382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.321 [2024-11-26 18:03:06.030394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.321 [2024-11-26 18:03:06.030435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.321 [2024-11-26 18:03:06.030446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:49.321 [2024-11-26 18:03:06.030469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.321 [2024-11-26 18:03:06.030479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.321 [2024-11-26 18:03:06.030534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.321 [2024-11-26 18:03:06.030545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:49.321 [2024-11-26 18:03:06.030560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.321 [2024-11-26 18:03:06.030570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.321 [2024-11-26 18:03:06.030716] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.864 ms, result 0 00:16:49.581 18:03:06 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:49.581 [2024-11-26 18:03:06.348621] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
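The read-back step launched above runs spdk_dd as a standalone SPDK application: it attaches to ftl0 through the same JSON config and copies 65536 units (--count) from the bdev into a regular file (test/ftl/data). A hedged sketch of the invocation exactly as this job's trim.sh issues it:

  # hedged sketch: paths as used in this run; --ib names the input bdev, --of the output file
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
      --count=65536 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

Because spdk_dd is its own SPDK app, the lines that follow repeat the full EAL bring-up and FTL startup sequence seen earlier for spdk_tgt, this time under pid 83902.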
00:16:49.581 [2024-11-26 18:03:06.348904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83902 ] 00:16:49.581 [2024-11-26 18:03:06.499905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.840 [2024-11-26 18:03:06.540363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.840 [2024-11-26 18:03:06.641216] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:49.840 [2024-11-26 18:03:06.641506] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:50.099 [2024-11-26 18:03:06.792621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.099 [2024-11-26 18:03:06.792836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:50.099 [2024-11-26 18:03:06.792935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:50.099 [2024-11-26 18:03:06.792974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.099 [2024-11-26 18:03:06.795419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.099 [2024-11-26 18:03:06.795577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:50.099 [2024-11-26 18:03:06.795598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.399 ms 00:16:50.099 [2024-11-26 18:03:06.795610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.099 [2024-11-26 18:03:06.795738] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:50.099 [2024-11-26 18:03:06.795962] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:50.099 [2024-11-26 18:03:06.795979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.099 [2024-11-26 18:03:06.795989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:50.099 [2024-11-26 18:03:06.796001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:16:50.099 [2024-11-26 18:03:06.796011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.099 [2024-11-26 18:03:06.797475] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:50.099 [2024-11-26 18:03:06.799958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.099 [2024-11-26 18:03:06.800003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:50.099 [2024-11-26 18:03:06.800016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.489 ms 00:16:50.099 [2024-11-26 18:03:06.800026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.099 [2024-11-26 18:03:06.800089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.099 [2024-11-26 18:03:06.800102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:50.099 [2024-11-26 18:03:06.800120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:50.099 [2024-11-26 18:03:06.800137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.099 [2024-11-26 18:03:06.806764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.099 [2024-11-26 
18:03:06.806793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:50.099 [2024-11-26 18:03:06.806805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.592 ms 00:16:50.099 [2024-11-26 18:03:06.806824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.099 [2024-11-26 18:03:06.806932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.099 [2024-11-26 18:03:06.806946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:50.099 [2024-11-26 18:03:06.806960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:16:50.099 [2024-11-26 18:03:06.806970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.099 [2024-11-26 18:03:06.807005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.099 [2024-11-26 18:03:06.807016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:50.099 [2024-11-26 18:03:06.807027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:50.099 [2024-11-26 18:03:06.807040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.099 [2024-11-26 18:03:06.807073] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:50.099 [2024-11-26 18:03:06.808693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.099 [2024-11-26 18:03:06.808720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:50.099 [2024-11-26 18:03:06.808743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.631 ms 00:16:50.100 [2024-11-26 18:03:06.808753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.100 [2024-11-26 18:03:06.808801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.100 [2024-11-26 18:03:06.808816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:50.100 [2024-11-26 18:03:06.808826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:50.100 [2024-11-26 18:03:06.808836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.100 [2024-11-26 18:03:06.808856] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:50.100 [2024-11-26 18:03:06.808880] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:50.100 [2024-11-26 18:03:06.808913] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:50.100 [2024-11-26 18:03:06.808938] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:50.100 [2024-11-26 18:03:06.809008] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:50.100 [2024-11-26 18:03:06.809022] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:50.100 [2024-11-26 18:03:06.809034] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:50.100 [2024-11-26 18:03:06.809054] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:50.100 [2024-11-26 18:03:06.809065] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:50.100 [2024-11-26 18:03:06.809077] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:50.100 [2024-11-26 18:03:06.809094] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:50.100 [2024-11-26 18:03:06.809104] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:50.100 [2024-11-26 18:03:06.809117] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:50.100 [2024-11-26 18:03:06.809128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.100 [2024-11-26 18:03:06.809138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:50.100 [2024-11-26 18:03:06.809148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:16:50.100 [2024-11-26 18:03:06.809158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.100 [2024-11-26 18:03:06.809219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.100 [2024-11-26 18:03:06.809230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:50.100 [2024-11-26 18:03:06.809247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:50.100 [2024-11-26 18:03:06.809262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.100 [2024-11-26 18:03:06.809334] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:50.100 [2024-11-26 18:03:06.809352] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:50.100 [2024-11-26 18:03:06.809363] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:50.100 [2024-11-26 18:03:06.809373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:50.100 [2024-11-26 18:03:06.809383] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:50.100 [2024-11-26 18:03:06.809392] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:50.100 [2024-11-26 18:03:06.809401] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:50.100 [2024-11-26 18:03:06.809412] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:50.100 [2024-11-26 18:03:06.809421] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:50.100 [2024-11-26 18:03:06.809430] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:50.100 [2024-11-26 18:03:06.809440] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:50.100 [2024-11-26 18:03:06.809449] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:50.100 [2024-11-26 18:03:06.809469] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:50.100 [2024-11-26 18:03:06.809478] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:50.100 [2024-11-26 18:03:06.809490] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:50.100 [2024-11-26 18:03:06.809502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:50.100 [2024-11-26 18:03:06.809511] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:50.100 [2024-11-26 18:03:06.809521] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:50.100 [2024-11-26 18:03:06.809530] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:16:50.100 [2024-11-26 18:03:06.809539] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:50.100 [2024-11-26 18:03:06.809548] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:50.100 [2024-11-26 18:03:06.809558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:50.100 [2024-11-26 18:03:06.809567] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:50.100 [2024-11-26 18:03:06.809576] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:50.100 [2024-11-26 18:03:06.809585] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:50.100 [2024-11-26 18:03:06.809594] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:50.100 [2024-11-26 18:03:06.809604] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:50.100 [2024-11-26 18:03:06.809612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:50.100 [2024-11-26 18:03:06.809621] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:50.100 [2024-11-26 18:03:06.809631] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:50.100 [2024-11-26 18:03:06.809640] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:50.100 [2024-11-26 18:03:06.809654] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:50.100 [2024-11-26 18:03:06.809663] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:50.100 [2024-11-26 18:03:06.809672] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:50.100 [2024-11-26 18:03:06.809694] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:50.100 [2024-11-26 18:03:06.809704] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:50.100 [2024-11-26 18:03:06.809713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:50.100 [2024-11-26 18:03:06.809722] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:50.100 [2024-11-26 18:03:06.809734] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:50.100 [2024-11-26 18:03:06.809743] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:50.100 [2024-11-26 18:03:06.809752] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:50.100 [2024-11-26 18:03:06.809768] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:50.100 [2024-11-26 18:03:06.809778] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:50.100 [2024-11-26 18:03:06.809795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:50.100 [2024-11-26 18:03:06.809805] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:50.100 [2024-11-26 18:03:06.809814] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:50.100 [2024-11-26 18:03:06.809823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:50.100 [2024-11-26 18:03:06.809836] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:50.100 [2024-11-26 18:03:06.809846] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:50.100 [2024-11-26 18:03:06.809855] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:50.100 [2024-11-26 18:03:06.809865] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:50.100 [2024-11-26 18:03:06.809877] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:50.100 [2024-11-26 18:03:06.809891] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:50.100 [2024-11-26 18:03:06.809901] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:50.100 [2024-11-26 18:03:06.809911] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:50.100 [2024-11-26 18:03:06.809922] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:50.100 [2024-11-26 18:03:06.809932] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:50.100 [2024-11-26 18:03:06.809942] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:50.100 [2024-11-26 18:03:06.809952] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:50.100 [2024-11-26 18:03:06.809962] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:50.100 [2024-11-26 18:03:06.809972] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:50.100 [2024-11-26 18:03:06.809982] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:50.100 [2024-11-26 18:03:06.809993] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:50.100 [2024-11-26 18:03:06.810006] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:50.100 [2024-11-26 18:03:06.810016] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:50.100 [2024-11-26 18:03:06.810027] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:50.100 [2024-11-26 18:03:06.810044] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:50.100 [2024-11-26 18:03:06.810056] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:50.100 [2024-11-26 18:03:06.810066] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:50.101 [2024-11-26 18:03:06.810076] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:50.101 [2024-11-26 18:03:06.810086] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:50.101 [2024-11-26 18:03:06.810097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.101 [2024-11-26 18:03:06.810116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:50.101 [2024-11-26 18:03:06.810127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms 00:16:50.101 [2024-11-26 18:03:06.810137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.101 [2024-11-26 18:03:06.818487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.101 [2024-11-26 18:03:06.818527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:50.101 [2024-11-26 18:03:06.818547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.309 ms 00:16:50.101 [2024-11-26 18:03:06.818561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.101 [2024-11-26 18:03:06.818677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.101 [2024-11-26 18:03:06.818689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:50.101 [2024-11-26 18:03:06.818701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:16:50.101 [2024-11-26 18:03:06.818711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.101 [2024-11-26 18:03:06.837978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.101 [2024-11-26 18:03:06.838029] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:50.101 [2024-11-26 18:03:06.838051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.273 ms 00:16:50.101 [2024-11-26 18:03:06.838077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.101 [2024-11-26 18:03:06.838169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.101 [2024-11-26 18:03:06.838185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:50.101 [2024-11-26 18:03:06.838199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:50.101 [2024-11-26 18:03:06.838212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.101 [2024-11-26 18:03:06.838702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.101 [2024-11-26 18:03:06.838720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:50.101 [2024-11-26 18:03:06.838734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:16:50.101 [2024-11-26 18:03:06.838746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.101 [2024-11-26 18:03:06.838893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.101 [2024-11-26 18:03:06.838910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:50.101 [2024-11-26 18:03:06.838924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:16:50.101 [2024-11-26 18:03:06.838936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.101 [2024-11-26 18:03:06.846248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.101 [2024-11-26 18:03:06.846286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:50.101 [2024-11-26 18:03:06.846299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.293 ms 00:16:50.101 
[2024-11-26 18:03:06.846309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.101 [2024-11-26 18:03:06.848904] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:50.101 [2024-11-26 18:03:06.849042] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:50.101 [2024-11-26 18:03:06.849063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.101 [2024-11-26 18:03:06.849074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:50.101 [2024-11-26 18:03:06.849084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.664 ms 00:16:50.101 [2024-11-26 18:03:06.849094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.101 [2024-11-26 18:03:06.861961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.101 [2024-11-26 18:03:06.861998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:50.101 [2024-11-26 18:03:06.862012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.786 ms 00:16:50.101 [2024-11-26 18:03:06.862023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.101 [2024-11-26 18:03:06.863834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.101 [2024-11-26 18:03:06.863968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:50.101 [2024-11-26 18:03:06.863987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.730 ms 00:16:50.101 [2024-11-26 18:03:06.863997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.101 [2024-11-26 18:03:06.865414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.101 [2024-11-26 18:03:06.865448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:50.101 [2024-11-26 18:03:06.865473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.377 ms 00:16:50.101 [2024-11-26 18:03:06.865483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.101 [2024-11-26 18:03:06.865675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.101 [2024-11-26 18:03:06.865691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:50.101 [2024-11-26 18:03:06.865702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:16:50.101 [2024-11-26 18:03:06.865712] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.101 [2024-11-26 18:03:06.888150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.101 [2024-11-26 18:03:06.888203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:50.101 [2024-11-26 18:03:06.888219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.450 ms 00:16:50.101 [2024-11-26 18:03:06.888230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.101 [2024-11-26 18:03:06.894520] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:50.101 [2024-11-26 18:03:06.910713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.101 [2024-11-26 18:03:06.910760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:50.101 [2024-11-26 18:03:06.910774] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.425 ms 00:16:50.101 [2024-11-26 18:03:06.910797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.101 [2024-11-26 18:03:06.910890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.101 [2024-11-26 18:03:06.910907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:50.101 [2024-11-26 18:03:06.910918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:50.101 [2024-11-26 18:03:06.910929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.101 [2024-11-26 18:03:06.910980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.101 [2024-11-26 18:03:06.910992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:50.101 [2024-11-26 18:03:06.911003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:16:50.101 [2024-11-26 18:03:06.911013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.101 [2024-11-26 18:03:06.913081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.101 [2024-11-26 18:03:06.913108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:50.101 [2024-11-26 18:03:06.913123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.045 ms 00:16:50.101 [2024-11-26 18:03:06.913133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.101 [2024-11-26 18:03:06.913179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.101 [2024-11-26 18:03:06.913192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:50.101 [2024-11-26 18:03:06.913202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:50.101 [2024-11-26 18:03:06.913216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.101 [2024-11-26 18:03:06.913252] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:50.101 [2024-11-26 18:03:06.913270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.101 [2024-11-26 18:03:06.913280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:50.101 [2024-11-26 18:03:06.913290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:16:50.101 [2024-11-26 18:03:06.913310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.101 [2024-11-26 18:03:06.917051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.101 [2024-11-26 18:03:06.917087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:50.101 [2024-11-26 18:03:06.917100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.714 ms 00:16:50.101 [2024-11-26 18:03:06.917110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.101 [2024-11-26 18:03:06.917191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.101 [2024-11-26 18:03:06.917203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:50.101 [2024-11-26 18:03:06.917214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:50.101 [2024-11-26 18:03:06.917224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.101 [2024-11-26 18:03:06.918282] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:50.101 [2024-11-26 18:03:06.919351] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 125.591 ms, result 0 00:16:50.101 [2024-11-26 18:03:06.920182] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:50.101 [2024-11-26 18:03:06.928575] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:51.479  [2024-11-26T18:03:09.341Z] Copying: 28/256 [MB] (28 MBps) [2024-11-26T18:03:10.308Z] Copying: 55/256 [MB] (26 MBps) [2024-11-26T18:03:11.246Z] Copying: 81/256 [MB] (25 MBps) [2024-11-26T18:03:12.182Z] Copying: 107/256 [MB] (26 MBps) [2024-11-26T18:03:13.121Z] Copying: 134/256 [MB] (27 MBps) [2024-11-26T18:03:14.075Z] Copying: 162/256 [MB] (27 MBps) [2024-11-26T18:03:15.014Z] Copying: 188/256 [MB] (26 MBps) [2024-11-26T18:03:16.394Z] Copying: 214/256 [MB] (25 MBps) [2024-11-26T18:03:16.653Z] Copying: 240/256 [MB] (26 MBps) [2024-11-26T18:03:16.914Z] Copying: 256/256 [MB] (average 26 MBps)[2024-11-26 18:03:16.708086] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:59.988 [2024-11-26 18:03:16.709499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.988 [2024-11-26 18:03:16.709532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:59.988 [2024-11-26 18:03:16.709551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:59.988 [2024-11-26 18:03:16.709562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.988 [2024-11-26 18:03:16.709586] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:59.988 [2024-11-26 18:03:16.710255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.988 [2024-11-26 18:03:16.710272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:59.988 [2024-11-26 18:03:16.710283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:16:59.988 [2024-11-26 18:03:16.710293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.988 [2024-11-26 18:03:16.710553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.988 [2024-11-26 18:03:16.710567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:59.988 [2024-11-26 18:03:16.710577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:16:59.988 [2024-11-26 18:03:16.710587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.988 [2024-11-26 18:03:16.713448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.988 [2024-11-26 18:03:16.713487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:59.988 [2024-11-26 18:03:16.713499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.842 ms 00:16:59.988 [2024-11-26 18:03:16.713509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.988 [2024-11-26 18:03:16.719169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.988 [2024-11-26 18:03:16.719212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:59.988 [2024-11-26 18:03:16.719226] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 5.623 ms 00:16:59.988 [2024-11-26 18:03:16.719236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.988 [2024-11-26 18:03:16.721199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.988 [2024-11-26 18:03:16.721362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:59.988 [2024-11-26 18:03:16.721386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.860 ms 00:16:59.988 [2024-11-26 18:03:16.721395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.988 [2024-11-26 18:03:16.725076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.988 [2024-11-26 18:03:16.725119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:59.988 [2024-11-26 18:03:16.725132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.648 ms 00:16:59.988 [2024-11-26 18:03:16.725142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.988 [2024-11-26 18:03:16.725271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.988 [2024-11-26 18:03:16.725285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:59.988 [2024-11-26 18:03:16.725304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:16:59.988 [2024-11-26 18:03:16.725314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.988 [2024-11-26 18:03:16.727334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.988 [2024-11-26 18:03:16.727373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:59.988 [2024-11-26 18:03:16.727385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.003 ms 00:16:59.988 [2024-11-26 18:03:16.727395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.988 [2024-11-26 18:03:16.728961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.988 [2024-11-26 18:03:16.729000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:59.988 [2024-11-26 18:03:16.729011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.532 ms 00:16:59.988 [2024-11-26 18:03:16.729021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.988 [2024-11-26 18:03:16.730257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.988 [2024-11-26 18:03:16.730292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:59.988 [2024-11-26 18:03:16.730304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.218 ms 00:16:59.988 [2024-11-26 18:03:16.730313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.988 [2024-11-26 18:03:16.731263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.988 [2024-11-26 18:03:16.731412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:59.988 [2024-11-26 18:03:16.731434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.900 ms 00:16:59.988 [2024-11-26 18:03:16.731445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.988 [2024-11-26 18:03:16.731502] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:59.988 [2024-11-26 18:03:16.731527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 
261120 wr_cnt: 0 state: free 00:16:59.988 [2024-11-26 18:03:16.731550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:59.988 [2024-11-26 18:03:16.731561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:59.988 [2024-11-26 18:03:16.731573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:59.988 [2024-11-26 18:03:16.731583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:59.988 [2024-11-26 18:03:16.731594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:59.988 [2024-11-26 18:03:16.731605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:59.988 [2024-11-26 18:03:16.731616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:59.988 [2024-11-26 18:03:16.731626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:59.988 [2024-11-26 18:03:16.731637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:59.988 [2024-11-26 18:03:16.731647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:59.988 [2024-11-26 18:03:16.731658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.731995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732080] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732349] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:59.989 [2024-11-26 18:03:16.732605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:59.990 [2024-11-26 18:03:16.732615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:59.990 [2024-11-26 
18:03:16.732633] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:59.990 [2024-11-26 18:03:16.732643] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e724961a-19a7-434a-83e9-346380d427cd 00:16:59.990 [2024-11-26 18:03:16.732654] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:59.990 [2024-11-26 18:03:16.732671] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:59.990 [2024-11-26 18:03:16.732681] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:59.990 [2024-11-26 18:03:16.732692] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:59.990 [2024-11-26 18:03:16.732702] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:59.990 [2024-11-26 18:03:16.732712] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:59.990 [2024-11-26 18:03:16.732721] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:59.990 [2024-11-26 18:03:16.732731] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:59.990 [2024-11-26 18:03:16.732740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:59.990 [2024-11-26 18:03:16.732750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.990 [2024-11-26 18:03:16.732759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:59.990 [2024-11-26 18:03:16.732776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.251 ms 00:16:59.990 [2024-11-26 18:03:16.732786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.990 [2024-11-26 18:03:16.734812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.990 [2024-11-26 18:03:16.734873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:59.990 [2024-11-26 18:03:16.734918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.005 ms 00:16:59.990 [2024-11-26 18:03:16.734959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.990 [2024-11-26 18:03:16.735074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.990 [2024-11-26 18:03:16.735277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:59.990 [2024-11-26 18:03:16.735319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:16:59.990 [2024-11-26 18:03:16.735351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.990 [2024-11-26 18:03:16.742304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.990 [2024-11-26 18:03:16.742474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:59.990 [2024-11-26 18:03:16.742499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.990 [2024-11-26 18:03:16.742509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.990 [2024-11-26 18:03:16.742619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.990 [2024-11-26 18:03:16.742636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:59.990 [2024-11-26 18:03:16.742647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.990 [2024-11-26 18:03:16.742657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.990 [2024-11-26 18:03:16.742715] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.990 [2024-11-26 18:03:16.742728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:59.990 [2024-11-26 18:03:16.742751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.990 [2024-11-26 18:03:16.742762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.990 [2024-11-26 18:03:16.742788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.990 [2024-11-26 18:03:16.742800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:59.990 [2024-11-26 18:03:16.742813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.990 [2024-11-26 18:03:16.742823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.990 [2024-11-26 18:03:16.758695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.990 [2024-11-26 18:03:16.758929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:59.990 [2024-11-26 18:03:16.758955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.990 [2024-11-26 18:03:16.758967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.990 [2024-11-26 18:03:16.764887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.990 [2024-11-26 18:03:16.764943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:59.990 [2024-11-26 18:03:16.764956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.990 [2024-11-26 18:03:16.764967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.990 [2024-11-26 18:03:16.765043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.990 [2024-11-26 18:03:16.765056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:59.990 [2024-11-26 18:03:16.765067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.990 [2024-11-26 18:03:16.765078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.990 [2024-11-26 18:03:16.765110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.990 [2024-11-26 18:03:16.765122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:59.990 [2024-11-26 18:03:16.765133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.990 [2024-11-26 18:03:16.765148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.990 [2024-11-26 18:03:16.765242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.990 [2024-11-26 18:03:16.765255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:59.990 [2024-11-26 18:03:16.765266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.990 [2024-11-26 18:03:16.765276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.990 [2024-11-26 18:03:16.765326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.990 [2024-11-26 18:03:16.765338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:59.990 [2024-11-26 18:03:16.765349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.990 [2024-11-26 18:03:16.765359] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:59.990 [2024-11-26 18:03:16.765403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.990 [2024-11-26 18:03:16.765414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:59.990 [2024-11-26 18:03:16.765425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.990 [2024-11-26 18:03:16.765443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.990 [2024-11-26 18:03:16.765728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.990 [2024-11-26 18:03:16.765741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:59.990 [2024-11-26 18:03:16.765752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.990 [2024-11-26 18:03:16.765766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.990 [2024-11-26 18:03:16.765906] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.473 ms, result 0 00:17:00.249 00:17:00.249 00:17:00.249 18:03:17 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:00.814 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:00.814 18:03:17 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:00.814 18:03:17 -- ftl/trim.sh@109 -- # fio_kill 00:17:00.814 18:03:17 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:00.814 18:03:17 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:00.814 18:03:17 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:00.814 18:03:17 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:00.814 18:03:17 -- ftl/trim.sh@20 -- # killprocess 83866 00:17:00.814 18:03:17 -- common/autotest_common.sh@936 -- # '[' -z 83866 ']' 00:17:00.814 18:03:17 -- common/autotest_common.sh@940 -- # kill -0 83866 00:17:00.814 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (83866) - No such process 00:17:00.814 Process with pid 83866 is not found 00:17:00.814 18:03:17 -- common/autotest_common.sh@963 -- # echo 'Process with pid 83866 is not found' 00:17:00.814 00:17:00.814 real 0m53.096s 00:17:00.814 user 1m14.740s 00:17:00.814 sys 0m6.275s 00:17:00.814 ************************************ 00:17:00.814 END TEST ftl_trim 00:17:00.814 ************************************ 00:17:00.814 18:03:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:00.814 18:03:17 -- common/autotest_common.sh@10 -- # set +x 00:17:00.814 18:03:17 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:17:00.814 18:03:17 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:00.814 18:03:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:00.814 18:03:17 -- common/autotest_common.sh@10 -- # set +x 00:17:00.814 ************************************ 00:17:00.814 START TEST ftl_restore 00:17:00.814 ************************************ 00:17:00.814 18:03:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:17:01.074 * Looking for test storage... 
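Before the storage check completes below, note what the trim run above just verified: the md5sum -c against testfile.md5 reporting 'data: OK' is the whole point of the test. The 256 MB pattern copied through the FTL bdev (the Copying: x/256 [MB] progress earlier) has to read back byte-identical after the trim operations and the 'FTL shutdown' / 'FTL startup' management cycles traced above. A minimal sketch of that checksum round trip, using the same file names that fio_kill removes; the dd line stands in for the spdk_dd transfers and is illustrative, not the script's exact invocation:

    dd if=/dev/urandom of=random_pattern bs=1M count=256   # 256 MiB test pattern
    # ... write random_pattern through the FTL bdev, read it back into 'data' ...
    md5sum data > testfile.md5
    # ... trim ranges, shut the FTL instance down, start it again, re-read into 'data' ...
    md5sum -c testfile.md5   # prints 'data: OK' only if the post-restart readback matches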
00:17:01.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:01.074 18:03:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:01.074 18:03:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:01.074 18:03:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:01.074 18:03:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:01.074 18:03:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:01.074 18:03:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:01.074 18:03:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:01.074 18:03:17 -- scripts/common.sh@335 -- # IFS=.-: 00:17:01.074 18:03:17 -- scripts/common.sh@335 -- # read -ra ver1 00:17:01.074 18:03:17 -- scripts/common.sh@336 -- # IFS=.-: 00:17:01.074 18:03:17 -- scripts/common.sh@336 -- # read -ra ver2 00:17:01.074 18:03:17 -- scripts/common.sh@337 -- # local 'op=<' 00:17:01.074 18:03:17 -- scripts/common.sh@339 -- # ver1_l=2 00:17:01.074 18:03:17 -- scripts/common.sh@340 -- # ver2_l=1 00:17:01.074 18:03:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:01.074 18:03:17 -- scripts/common.sh@343 -- # case "$op" in 00:17:01.074 18:03:17 -- scripts/common.sh@344 -- # : 1 00:17:01.074 18:03:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:01.074 18:03:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:01.074 18:03:17 -- scripts/common.sh@364 -- # decimal 1 00:17:01.074 18:03:17 -- scripts/common.sh@352 -- # local d=1 00:17:01.074 18:03:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:01.074 18:03:17 -- scripts/common.sh@354 -- # echo 1 00:17:01.074 18:03:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:01.074 18:03:17 -- scripts/common.sh@365 -- # decimal 2 00:17:01.074 18:03:17 -- scripts/common.sh@352 -- # local d=2 00:17:01.074 18:03:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:01.074 18:03:17 -- scripts/common.sh@354 -- # echo 2 00:17:01.074 18:03:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:01.074 18:03:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:01.074 18:03:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:01.074 18:03:17 -- scripts/common.sh@367 -- # return 0 00:17:01.074 18:03:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:01.074 18:03:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:01.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.074 --rc genhtml_branch_coverage=1 00:17:01.074 --rc genhtml_function_coverage=1 00:17:01.074 --rc genhtml_legend=1 00:17:01.074 --rc geninfo_all_blocks=1 00:17:01.074 --rc geninfo_unexecuted_blocks=1 00:17:01.074 00:17:01.074 ' 00:17:01.074 18:03:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:01.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.074 --rc genhtml_branch_coverage=1 00:17:01.074 --rc genhtml_function_coverage=1 00:17:01.074 --rc genhtml_legend=1 00:17:01.074 --rc geninfo_all_blocks=1 00:17:01.074 --rc geninfo_unexecuted_blocks=1 00:17:01.074 00:17:01.074 ' 00:17:01.074 18:03:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:01.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.074 --rc genhtml_branch_coverage=1 00:17:01.074 --rc genhtml_function_coverage=1 00:17:01.074 --rc genhtml_legend=1 00:17:01.074 --rc geninfo_all_blocks=1 00:17:01.074 --rc geninfo_unexecuted_blocks=1 00:17:01.074 00:17:01.074 ' 00:17:01.074 18:03:17 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:01.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.074 --rc genhtml_branch_coverage=1 00:17:01.074 --rc genhtml_function_coverage=1 00:17:01.074 --rc genhtml_legend=1 00:17:01.074 --rc geninfo_all_blocks=1 00:17:01.074 --rc geninfo_unexecuted_blocks=1 00:17:01.074 00:17:01.074 ' 00:17:01.074 18:03:17 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:01.074 18:03:17 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:01.074 18:03:17 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:01.074 18:03:17 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:01.074 18:03:17 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:01.074 18:03:17 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:01.074 18:03:17 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:01.074 18:03:17 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:01.074 18:03:17 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:01.074 18:03:17 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:01.074 18:03:17 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:01.074 18:03:17 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:01.074 18:03:17 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:01.074 18:03:17 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:01.074 18:03:17 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:01.074 18:03:17 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:01.074 18:03:17 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:01.074 18:03:17 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:01.074 18:03:17 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:01.074 18:03:17 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:01.074 18:03:17 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:01.074 18:03:17 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:01.074 18:03:17 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:01.074 18:03:17 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:01.074 18:03:17 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:01.074 18:03:17 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:01.074 18:03:17 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:01.074 18:03:17 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:01.074 18:03:17 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:01.074 18:03:17 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:01.074 18:03:17 -- ftl/restore.sh@13 -- # mktemp -d 00:17:01.074 18:03:17 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.xTOr45vsT8 00:17:01.074 18:03:17 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:01.074 18:03:17 -- ftl/restore.sh@16 -- # case $opt in 00:17:01.074 18:03:17 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0 00:17:01.074 18:03:17 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 
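The getopts line above, together with the shift 2 that follows, is how restore.sh maps its invocation (restore.sh -c 0000:00:06.0 0000:00:07.0) onto the nv_cache, device and timeout values used for the rest of the test. A minimal sketch of that parsing, assuming the same :u:c:f optstring; the -u and -f branches are inferred from the optstring and are assumptions, not confirmed by this log:

    while getopts :u:c:f opt; do
      case $opt in
        c) nv_cache=$OPTARG ;;  # -c <bdf>: PCIe address of the NV cache device (0000:00:06.0 here)
        u) uuid=$OPTARG ;;      # assumption: restore an existing FTL instance by UUID
        f) fast=1 ;;            # assumption: boolean flag, takes no argument per the optstring
      esac
    done
    shift $((OPTIND - 1))       # equivalent to the 'shift 2' traced below after -c and its argument
    device=$1                   # base device: 0000:00:07.0
    timeout=240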
00:17:01.074 18:03:17 -- ftl/restore.sh@23 -- # shift 2 00:17:01.074 18:03:17 -- ftl/restore.sh@24 -- # device=0000:00:07.0 00:17:01.074 18:03:17 -- ftl/restore.sh@25 -- # timeout=240 00:17:01.074 18:03:17 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:01.074 18:03:17 -- ftl/restore.sh@39 -- # svcpid=84090 00:17:01.074 18:03:17 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:01.074 18:03:17 -- ftl/restore.sh@41 -- # waitforlisten 84090 00:17:01.074 18:03:17 -- common/autotest_common.sh@829 -- # '[' -z 84090 ']' 00:17:01.074 18:03:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.074 18:03:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:01.074 18:03:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.074 18:03:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:01.074 18:03:17 -- common/autotest_common.sh@10 -- # set +x 00:17:01.074 [2024-11-26 18:03:17.985011] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:01.074 [2024-11-26 18:03:17.985131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84090 ] 00:17:01.346 [2024-11-26 18:03:18.133668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.346 [2024-11-26 18:03:18.174613] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:01.346 [2024-11-26 18:03:18.174810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.930 18:03:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:01.930 18:03:18 -- common/autotest_common.sh@862 -- # return 0 00:17:01.930 18:03:18 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:17:01.930 18:03:18 -- ftl/common.sh@54 -- # local name=nvme0 00:17:01.930 18:03:18 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:17:01.931 18:03:18 -- ftl/common.sh@56 -- # local size=103424 00:17:01.931 18:03:18 -- ftl/common.sh@59 -- # local base_bdev 00:17:01.931 18:03:18 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:17:02.190 18:03:19 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:02.190 18:03:19 -- ftl/common.sh@62 -- # local base_size 00:17:02.190 18:03:19 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:02.190 18:03:19 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:17:02.190 18:03:19 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:02.190 18:03:19 -- common/autotest_common.sh@1369 -- # local bs 00:17:02.190 18:03:19 -- common/autotest_common.sh@1370 -- # local nb 00:17:02.190 18:03:19 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:02.462 18:03:19 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:02.462 { 00:17:02.462 "name": "nvme0n1", 00:17:02.462 "aliases": [ 00:17:02.462 "c58dc197-cbfa-4b0c-90da-6162ec224f59" 00:17:02.462 ], 00:17:02.462 "product_name": "NVMe disk", 00:17:02.462 "block_size": 4096, 00:17:02.462 "num_blocks": 1310720, 00:17:02.462 "uuid": 
"c58dc197-cbfa-4b0c-90da-6162ec224f59", 00:17:02.462 "assigned_rate_limits": { 00:17:02.462 "rw_ios_per_sec": 0, 00:17:02.462 "rw_mbytes_per_sec": 0, 00:17:02.462 "r_mbytes_per_sec": 0, 00:17:02.462 "w_mbytes_per_sec": 0 00:17:02.462 }, 00:17:02.462 "claimed": true, 00:17:02.462 "claim_type": "read_many_write_one", 00:17:02.462 "zoned": false, 00:17:02.462 "supported_io_types": { 00:17:02.462 "read": true, 00:17:02.462 "write": true, 00:17:02.462 "unmap": true, 00:17:02.462 "write_zeroes": true, 00:17:02.462 "flush": true, 00:17:02.462 "reset": true, 00:17:02.462 "compare": true, 00:17:02.462 "compare_and_write": false, 00:17:02.462 "abort": true, 00:17:02.462 "nvme_admin": true, 00:17:02.462 "nvme_io": true 00:17:02.462 }, 00:17:02.462 "driver_specific": { 00:17:02.462 "nvme": [ 00:17:02.462 { 00:17:02.462 "pci_address": "0000:00:07.0", 00:17:02.462 "trid": { 00:17:02.462 "trtype": "PCIe", 00:17:02.462 "traddr": "0000:00:07.0" 00:17:02.462 }, 00:17:02.462 "ctrlr_data": { 00:17:02.462 "cntlid": 0, 00:17:02.462 "vendor_id": "0x1b36", 00:17:02.462 "model_number": "QEMU NVMe Ctrl", 00:17:02.462 "serial_number": "12341", 00:17:02.462 "firmware_revision": "8.0.0", 00:17:02.462 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:02.462 "oacs": { 00:17:02.462 "security": 0, 00:17:02.462 "format": 1, 00:17:02.462 "firmware": 0, 00:17:02.462 "ns_manage": 1 00:17:02.462 }, 00:17:02.462 "multi_ctrlr": false, 00:17:02.462 "ana_reporting": false 00:17:02.462 }, 00:17:02.462 "vs": { 00:17:02.462 "nvme_version": "1.4" 00:17:02.462 }, 00:17:02.462 "ns_data": { 00:17:02.462 "id": 1, 00:17:02.462 "can_share": false 00:17:02.462 } 00:17:02.462 } 00:17:02.462 ], 00:17:02.462 "mp_policy": "active_passive" 00:17:02.462 } 00:17:02.462 } 00:17:02.462 ]' 00:17:02.462 18:03:19 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:02.462 18:03:19 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:02.462 18:03:19 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:02.721 18:03:19 -- common/autotest_common.sh@1373 -- # nb=1310720 00:17:02.721 18:03:19 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:17:02.721 18:03:19 -- common/autotest_common.sh@1377 -- # echo 5120 00:17:02.721 18:03:19 -- ftl/common.sh@63 -- # base_size=5120 00:17:02.721 18:03:19 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:02.721 18:03:19 -- ftl/common.sh@67 -- # clear_lvols 00:17:02.721 18:03:19 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:02.721 18:03:19 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:02.721 18:03:19 -- ftl/common.sh@28 -- # stores=63c94432-7008-451d-9258-795027ee739a 00:17:02.721 18:03:19 -- ftl/common.sh@29 -- # for lvs in $stores 00:17:02.721 18:03:19 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 63c94432-7008-451d-9258-795027ee739a 00:17:02.980 18:03:19 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:03.238 18:03:20 -- ftl/common.sh@68 -- # lvs=53078330-6434-4d0c-9a59-26d3452ec53b 00:17:03.238 18:03:20 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 53078330-6434-4d0c-9a59-26d3452ec53b 00:17:03.496 18:03:20 -- ftl/restore.sh@43 -- # split_bdev=d5f989e1-96e1-4e7c-b6b2-f91716f81b4c 00:17:03.496 18:03:20 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:17:03.496 18:03:20 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 
d5f989e1-96e1-4e7c-b6b2-f91716f81b4c 00:17:03.496 18:03:20 -- ftl/common.sh@35 -- # local name=nvc0 00:17:03.496 18:03:20 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:17:03.496 18:03:20 -- ftl/common.sh@37 -- # local base_bdev=d5f989e1-96e1-4e7c-b6b2-f91716f81b4c 00:17:03.496 18:03:20 -- ftl/common.sh@38 -- # local cache_size= 00:17:03.496 18:03:20 -- ftl/common.sh@41 -- # get_bdev_size d5f989e1-96e1-4e7c-b6b2-f91716f81b4c 00:17:03.496 18:03:20 -- common/autotest_common.sh@1367 -- # local bdev_name=d5f989e1-96e1-4e7c-b6b2-f91716f81b4c 00:17:03.496 18:03:20 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:03.496 18:03:20 -- common/autotest_common.sh@1369 -- # local bs 00:17:03.496 18:03:20 -- common/autotest_common.sh@1370 -- # local nb 00:17:03.496 18:03:20 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d5f989e1-96e1-4e7c-b6b2-f91716f81b4c 00:17:03.755 18:03:20 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:03.755 { 00:17:03.755 "name": "d5f989e1-96e1-4e7c-b6b2-f91716f81b4c", 00:17:03.755 "aliases": [ 00:17:03.755 "lvs/nvme0n1p0" 00:17:03.755 ], 00:17:03.755 "product_name": "Logical Volume", 00:17:03.755 "block_size": 4096, 00:17:03.755 "num_blocks": 26476544, 00:17:03.755 "uuid": "d5f989e1-96e1-4e7c-b6b2-f91716f81b4c", 00:17:03.755 "assigned_rate_limits": { 00:17:03.755 "rw_ios_per_sec": 0, 00:17:03.755 "rw_mbytes_per_sec": 0, 00:17:03.755 "r_mbytes_per_sec": 0, 00:17:03.755 "w_mbytes_per_sec": 0 00:17:03.755 }, 00:17:03.755 "claimed": false, 00:17:03.755 "zoned": false, 00:17:03.755 "supported_io_types": { 00:17:03.755 "read": true, 00:17:03.755 "write": true, 00:17:03.755 "unmap": true, 00:17:03.755 "write_zeroes": true, 00:17:03.755 "flush": false, 00:17:03.755 "reset": true, 00:17:03.755 "compare": false, 00:17:03.755 "compare_and_write": false, 00:17:03.755 "abort": false, 00:17:03.755 "nvme_admin": false, 00:17:03.755 "nvme_io": false 00:17:03.755 }, 00:17:03.755 "driver_specific": { 00:17:03.755 "lvol": { 00:17:03.755 "lvol_store_uuid": "53078330-6434-4d0c-9a59-26d3452ec53b", 00:17:03.755 "base_bdev": "nvme0n1", 00:17:03.755 "thin_provision": true, 00:17:03.755 "snapshot": false, 00:17:03.755 "clone": false, 00:17:03.755 "esnap_clone": false 00:17:03.755 } 00:17:03.755 } 00:17:03.755 } 00:17:03.755 ]' 00:17:03.755 18:03:20 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:03.755 18:03:20 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:03.755 18:03:20 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:03.755 18:03:20 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:03.755 18:03:20 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:03.755 18:03:20 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:03.755 18:03:20 -- ftl/common.sh@41 -- # local base_size=5171 00:17:03.755 18:03:20 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:03.756 18:03:20 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:17:04.014 18:03:20 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:04.014 18:03:20 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:04.014 18:03:20 -- ftl/common.sh@48 -- # get_bdev_size d5f989e1-96e1-4e7c-b6b2-f91716f81b4c 00:17:04.014 18:03:20 -- common/autotest_common.sh@1367 -- # local bdev_name=d5f989e1-96e1-4e7c-b6b2-f91716f81b4c 00:17:04.014 18:03:20 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:04.014 18:03:20 -- common/autotest_common.sh@1369 -- # local 
bs 00:17:04.014 18:03:20 -- common/autotest_common.sh@1370 -- # local nb 00:17:04.014 18:03:20 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d5f989e1-96e1-4e7c-b6b2-f91716f81b4c 00:17:04.274 18:03:21 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:04.274 { 00:17:04.274 "name": "d5f989e1-96e1-4e7c-b6b2-f91716f81b4c", 00:17:04.274 "aliases": [ 00:17:04.274 "lvs/nvme0n1p0" 00:17:04.274 ], 00:17:04.274 "product_name": "Logical Volume", 00:17:04.274 "block_size": 4096, 00:17:04.274 "num_blocks": 26476544, 00:17:04.274 "uuid": "d5f989e1-96e1-4e7c-b6b2-f91716f81b4c", 00:17:04.274 "assigned_rate_limits": { 00:17:04.274 "rw_ios_per_sec": 0, 00:17:04.274 "rw_mbytes_per_sec": 0, 00:17:04.274 "r_mbytes_per_sec": 0, 00:17:04.274 "w_mbytes_per_sec": 0 00:17:04.274 }, 00:17:04.274 "claimed": false, 00:17:04.274 "zoned": false, 00:17:04.274 "supported_io_types": { 00:17:04.274 "read": true, 00:17:04.274 "write": true, 00:17:04.274 "unmap": true, 00:17:04.274 "write_zeroes": true, 00:17:04.274 "flush": false, 00:17:04.274 "reset": true, 00:17:04.274 "compare": false, 00:17:04.274 "compare_and_write": false, 00:17:04.274 "abort": false, 00:17:04.274 "nvme_admin": false, 00:17:04.274 "nvme_io": false 00:17:04.274 }, 00:17:04.274 "driver_specific": { 00:17:04.274 "lvol": { 00:17:04.274 "lvol_store_uuid": "53078330-6434-4d0c-9a59-26d3452ec53b", 00:17:04.274 "base_bdev": "nvme0n1", 00:17:04.274 "thin_provision": true, 00:17:04.274 "snapshot": false, 00:17:04.274 "clone": false, 00:17:04.274 "esnap_clone": false 00:17:04.274 } 00:17:04.274 } 00:17:04.274 } 00:17:04.274 ]' 00:17:04.274 18:03:21 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:04.274 18:03:21 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:04.274 18:03:21 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:04.274 18:03:21 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:04.274 18:03:21 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:04.274 18:03:21 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:04.274 18:03:21 -- ftl/common.sh@48 -- # cache_size=5171 00:17:04.274 18:03:21 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:04.532 18:03:21 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:04.532 18:03:21 -- ftl/restore.sh@48 -- # get_bdev_size d5f989e1-96e1-4e7c-b6b2-f91716f81b4c 00:17:04.532 18:03:21 -- common/autotest_common.sh@1367 -- # local bdev_name=d5f989e1-96e1-4e7c-b6b2-f91716f81b4c 00:17:04.532 18:03:21 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:04.532 18:03:21 -- common/autotest_common.sh@1369 -- # local bs 00:17:04.532 18:03:21 -- common/autotest_common.sh@1370 -- # local nb 00:17:04.532 18:03:21 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d5f989e1-96e1-4e7c-b6b2-f91716f81b4c 00:17:04.791 18:03:21 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:04.791 { 00:17:04.791 "name": "d5f989e1-96e1-4e7c-b6b2-f91716f81b4c", 00:17:04.791 "aliases": [ 00:17:04.791 "lvs/nvme0n1p0" 00:17:04.791 ], 00:17:04.791 "product_name": "Logical Volume", 00:17:04.791 "block_size": 4096, 00:17:04.791 "num_blocks": 26476544, 00:17:04.791 "uuid": "d5f989e1-96e1-4e7c-b6b2-f91716f81b4c", 00:17:04.791 "assigned_rate_limits": { 00:17:04.791 "rw_ios_per_sec": 0, 00:17:04.791 "rw_mbytes_per_sec": 0, 00:17:04.791 "r_mbytes_per_sec": 0, 00:17:04.791 "w_mbytes_per_sec": 0 00:17:04.791 }, 00:17:04.791 
"claimed": false, 00:17:04.791 "zoned": false, 00:17:04.791 "supported_io_types": { 00:17:04.791 "read": true, 00:17:04.791 "write": true, 00:17:04.791 "unmap": true, 00:17:04.791 "write_zeroes": true, 00:17:04.791 "flush": false, 00:17:04.791 "reset": true, 00:17:04.791 "compare": false, 00:17:04.791 "compare_and_write": false, 00:17:04.791 "abort": false, 00:17:04.791 "nvme_admin": false, 00:17:04.791 "nvme_io": false 00:17:04.791 }, 00:17:04.791 "driver_specific": { 00:17:04.791 "lvol": { 00:17:04.791 "lvol_store_uuid": "53078330-6434-4d0c-9a59-26d3452ec53b", 00:17:04.791 "base_bdev": "nvme0n1", 00:17:04.791 "thin_provision": true, 00:17:04.791 "snapshot": false, 00:17:04.791 "clone": false, 00:17:04.791 "esnap_clone": false 00:17:04.791 } 00:17:04.791 } 00:17:04.791 } 00:17:04.791 ]' 00:17:04.791 18:03:21 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:04.791 18:03:21 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:04.791 18:03:21 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:04.791 18:03:21 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:04.791 18:03:21 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:04.791 18:03:21 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:04.791 18:03:21 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:04.791 18:03:21 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d d5f989e1-96e1-4e7c-b6b2-f91716f81b4c --l2p_dram_limit 10' 00:17:04.791 18:03:21 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:04.791 18:03:21 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:17:04.791 18:03:21 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:04.791 18:03:21 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:04.791 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:04.791 18:03:21 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d5f989e1-96e1-4e7c-b6b2-f91716f81b4c --l2p_dram_limit 10 -c nvc0n1p0 00:17:05.051 [2024-11-26 18:03:21.830742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.051 [2024-11-26 18:03:21.830802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:05.051 [2024-11-26 18:03:21.830823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:05.051 [2024-11-26 18:03:21.830834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.051 [2024-11-26 18:03:21.830920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.051 [2024-11-26 18:03:21.830933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:05.051 [2024-11-26 18:03:21.830950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:05.051 [2024-11-26 18:03:21.830972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.051 [2024-11-26 18:03:21.831004] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:05.051 [2024-11-26 18:03:21.831300] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:05.051 [2024-11-26 18:03:21.831324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.051 [2024-11-26 18:03:21.831335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:05.051 [2024-11-26 18:03:21.831350] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:17:05.051 [2024-11-26 18:03:21.831360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.051 [2024-11-26 18:03:21.831402] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6a8373d9-959b-4681-90e1-ca6922fae046 00:17:05.051 [2024-11-26 18:03:21.832841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.051 [2024-11-26 18:03:21.833038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:05.051 [2024-11-26 18:03:21.833061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:05.051 [2024-11-26 18:03:21.833074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.051 [2024-11-26 18:03:21.840532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.051 [2024-11-26 18:03:21.840568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:05.051 [2024-11-26 18:03:21.840581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.375 ms 00:17:05.051 [2024-11-26 18:03:21.840596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.051 [2024-11-26 18:03:21.840699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.051 [2024-11-26 18:03:21.840717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:05.051 [2024-11-26 18:03:21.840727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:17:05.051 [2024-11-26 18:03:21.840749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.051 [2024-11-26 18:03:21.840809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.051 [2024-11-26 18:03:21.840823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:05.051 [2024-11-26 18:03:21.840834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:05.051 [2024-11-26 18:03:21.840850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.051 [2024-11-26 18:03:21.840878] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:05.051 [2024-11-26 18:03:21.842690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.051 [2024-11-26 18:03:21.842720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:05.051 [2024-11-26 18:03:21.842744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.821 ms 00:17:05.051 [2024-11-26 18:03:21.842754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.051 [2024-11-26 18:03:21.842795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.051 [2024-11-26 18:03:21.842807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:05.051 [2024-11-26 18:03:21.842826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:05.051 [2024-11-26 18:03:21.842836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.051 [2024-11-26 18:03:21.842874] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:05.051 [2024-11-26 18:03:21.842981] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:05.051 [2024-11-26 18:03:21.842998] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:05.051 [2024-11-26 18:03:21.843011] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:05.051 [2024-11-26 18:03:21.843032] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:05.051 [2024-11-26 18:03:21.843044] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:05.051 [2024-11-26 18:03:21.843058] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:05.051 [2024-11-26 18:03:21.843068] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:05.051 [2024-11-26 18:03:21.843080] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:05.051 [2024-11-26 18:03:21.843090] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:05.051 [2024-11-26 18:03:21.843102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.051 [2024-11-26 18:03:21.843113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:05.051 [2024-11-26 18:03:21.843127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:17:05.052 [2024-11-26 18:03:21.843138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.052 [2024-11-26 18:03:21.843203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.052 [2024-11-26 18:03:21.843214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:05.052 [2024-11-26 18:03:21.843227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:05.052 [2024-11-26 18:03:21.843236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.052 [2024-11-26 18:03:21.843312] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:05.052 [2024-11-26 18:03:21.843330] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:05.052 [2024-11-26 18:03:21.843343] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:05.052 [2024-11-26 18:03:21.843354] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.052 [2024-11-26 18:03:21.843368] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:05.052 [2024-11-26 18:03:21.843377] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:05.052 [2024-11-26 18:03:21.843389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:05.052 [2024-11-26 18:03:21.843398] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:05.052 [2024-11-26 18:03:21.843410] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:05.052 [2024-11-26 18:03:21.843419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:05.052 [2024-11-26 18:03:21.843431] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:05.052 [2024-11-26 18:03:21.843441] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:05.052 [2024-11-26 18:03:21.843479] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:05.052 [2024-11-26 18:03:21.843489] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:05.052 [2024-11-26 18:03:21.843501] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 97.62 MiB 00:17:05.052 [2024-11-26 18:03:21.843512] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.052 [2024-11-26 18:03:21.843524] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:05.052 [2024-11-26 18:03:21.843533] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:05.052 [2024-11-26 18:03:21.843545] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.052 [2024-11-26 18:03:21.843554] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:05.052 [2024-11-26 18:03:21.843566] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:05.052 [2024-11-26 18:03:21.843576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:05.052 [2024-11-26 18:03:21.843587] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:05.052 [2024-11-26 18:03:21.843596] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:05.052 [2024-11-26 18:03:21.843607] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:05.052 [2024-11-26 18:03:21.843617] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:05.052 [2024-11-26 18:03:21.843629] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:05.052 [2024-11-26 18:03:21.843638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:05.052 [2024-11-26 18:03:21.843651] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:05.052 [2024-11-26 18:03:21.843660] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:05.052 [2024-11-26 18:03:21.843673] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:05.052 [2024-11-26 18:03:21.843682] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:05.052 [2024-11-26 18:03:21.843694] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:05.052 [2024-11-26 18:03:21.843702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:05.052 [2024-11-26 18:03:21.843714] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:05.052 [2024-11-26 18:03:21.843723] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:05.052 [2024-11-26 18:03:21.843735] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:05.052 [2024-11-26 18:03:21.843744] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:05.052 [2024-11-26 18:03:21.843756] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:05.052 [2024-11-26 18:03:21.843765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:05.052 [2024-11-26 18:03:21.843776] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:05.052 [2024-11-26 18:03:21.843786] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:05.052 [2024-11-26 18:03:21.843801] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:05.052 [2024-11-26 18:03:21.843811] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.052 [2024-11-26 18:03:21.843826] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:05.052 [2024-11-26 18:03:21.843835] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:05.052 [2024-11-26 18:03:21.843847] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:05.052 [2024-11-26 18:03:21.843867] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:05.052 [2024-11-26 18:03:21.843879] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:05.052 [2024-11-26 18:03:21.843888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:05.052 [2024-11-26 18:03:21.843901] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:05.052 [2024-11-26 18:03:21.843914] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:05.052 [2024-11-26 18:03:21.843937] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:05.052 [2024-11-26 18:03:21.843947] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:05.052 [2024-11-26 18:03:21.843960] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:05.052 [2024-11-26 18:03:21.843971] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:05.052 [2024-11-26 18:03:21.843984] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:05.052 [2024-11-26 18:03:21.843994] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:05.052 [2024-11-26 18:03:21.844007] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:05.052 [2024-11-26 18:03:21.844017] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:05.052 [2024-11-26 18:03:21.844032] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:05.052 [2024-11-26 18:03:21.844042] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:05.052 [2024-11-26 18:03:21.844055] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:05.052 [2024-11-26 18:03:21.844066] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:05.052 [2024-11-26 18:03:21.844079] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:05.052 [2024-11-26 18:03:21.844088] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:05.052 [2024-11-26 18:03:21.844102] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:05.052 [2024-11-26 18:03:21.844113] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:05.052 [2024-11-26 18:03:21.844126] 
upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:05.052 [2024-11-26 18:03:21.844136] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:05.053 [2024-11-26 18:03:21.844149] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:05.053 [2024-11-26 18:03:21.844160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.053 [2024-11-26 18:03:21.844172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:05.053 [2024-11-26 18:03:21.844182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.888 ms 00:17:05.053 [2024-11-26 18:03:21.844195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.053 [2024-11-26 18:03:21.852680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.053 [2024-11-26 18:03:21.852837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:05.053 [2024-11-26 18:03:21.852955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.450 ms 00:17:05.053 [2024-11-26 18:03:21.852997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.053 [2024-11-26 18:03:21.853103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.053 [2024-11-26 18:03:21.853187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:05.053 [2024-11-26 18:03:21.853223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:17:05.053 [2024-11-26 18:03:21.853255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.053 [2024-11-26 18:03:21.865022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.053 [2024-11-26 18:03:21.865183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:05.053 [2024-11-26 18:03:21.865309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.664 ms 00:17:05.053 [2024-11-26 18:03:21.865360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.053 [2024-11-26 18:03:21.865418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.053 [2024-11-26 18:03:21.865463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:05.053 [2024-11-26 18:03:21.865497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:05.053 [2024-11-26 18:03:21.865579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.053 [2024-11-26 18:03:21.866091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.053 [2024-11-26 18:03:21.866158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:05.053 [2024-11-26 18:03:21.866239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:17:05.053 [2024-11-26 18:03:21.866282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.053 [2024-11-26 18:03:21.866412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.053 [2024-11-26 18:03:21.866511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:05.053 [2024-11-26 18:03:21.866590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 
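Everything ftl0 needs was assembled earlier in this log with a handful of rpc.py calls; condensed for reference (the commands appear verbatim in the trace above, and the UUIDs are the ones this run generated, so they would differ on another machine):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0   # base QEMU NVMe, 1310720 x 4 KiB blocks
$rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore 53078330-6434-4d0c-9a59-26d3452ec53b
$rpc bdev_lvol_create nvme0n1p0 103424 -t -u 53078330-6434-4d0c-9a59-26d3452ec53b   # thin-provisioned 103424 MiB lvol
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0    # NV-cache NVMe
$rpc bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB cache slice, nvc0n1p0
$rpc -t 240 bdev_ftl_create -b ftl0 -d d5f989e1-96e1-4e7c-b6b2-f91716f81b4c --l2p_dram_limit 10 -c nvc0n1p0

The --l2p_dram_limit 10 argument is why the l2p_cache notice further down caps the resident L2P at 9 of 10 MiB, and because this is a first startup the NV cache data region is scrubbed in full (the multi-second "Scrub NV cache" step that follows).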
00:17:05.053 [2024-11-26 18:03:21.866629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.053 [2024-11-26 18:03:21.873823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.053 [2024-11-26 18:03:21.873971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:05.053 [2024-11-26 18:03:21.874095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.131 ms 00:17:05.053 [2024-11-26 18:03:21.874135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.053 [2024-11-26 18:03:21.881859] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:05.053 [2024-11-26 18:03:21.885137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.053 [2024-11-26 18:03:21.885260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:05.053 [2024-11-26 18:03:21.885387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.913 ms 00:17:05.053 [2024-11-26 18:03:21.885431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.053 [2024-11-26 18:03:21.961861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.053 [2024-11-26 18:03:21.961926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:05.053 [2024-11-26 18:03:21.961945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.477 ms 00:17:05.053 [2024-11-26 18:03:21.961956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.053 [2024-11-26 18:03:21.962007] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:17:05.053 [2024-11-26 18:03:21.962030] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:09.247 [2024-11-26 18:03:25.697690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.247 [2024-11-26 18:03:25.697931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:09.247 [2024-11-26 18:03:25.697963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3741.739 ms 00:17:09.247 [2024-11-26 18:03:25.697974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.247 [2024-11-26 18:03:25.698182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.247 [2024-11-26 18:03:25.698196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:09.247 [2024-11-26 18:03:25.698210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:17:09.247 [2024-11-26 18:03:25.698230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.247 [2024-11-26 18:03:25.702021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.247 [2024-11-26 18:03:25.702072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:09.247 [2024-11-26 18:03:25.702094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.767 ms 00:17:09.247 [2024-11-26 18:03:25.702105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.247 [2024-11-26 18:03:25.704945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.247 [2024-11-26 18:03:25.705097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:09.247 [2024-11-26 18:03:25.705126] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.778 ms 00:17:09.247 [2024-11-26 18:03:25.705136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.247 [2024-11-26 18:03:25.705304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.247 [2024-11-26 18:03:25.705317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:09.247 [2024-11-26 18:03:25.705331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:17:09.247 [2024-11-26 18:03:25.705341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.247 [2024-11-26 18:03:25.732823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.247 [2024-11-26 18:03:25.732887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:09.247 [2024-11-26 18:03:25.732908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.495 ms 00:17:09.247 [2024-11-26 18:03:25.732920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.247 [2024-11-26 18:03:25.737843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.247 [2024-11-26 18:03:25.737892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:09.247 [2024-11-26 18:03:25.737914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.869 ms 00:17:09.247 [2024-11-26 18:03:25.737935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.247 [2024-11-26 18:03:25.740307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.247 [2024-11-26 18:03:25.740463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:09.247 [2024-11-26 18:03:25.740488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.332 ms 00:17:09.247 [2024-11-26 18:03:25.740499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.247 [2024-11-26 18:03:25.744226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.247 [2024-11-26 18:03:25.744385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:09.247 [2024-11-26 18:03:25.744412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.679 ms 00:17:09.247 [2024-11-26 18:03:25.744423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.247 [2024-11-26 18:03:25.744486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.247 [2024-11-26 18:03:25.744499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:09.247 [2024-11-26 18:03:25.744517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:09.247 [2024-11-26 18:03:25.744527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.247 [2024-11-26 18:03:25.744621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.247 [2024-11-26 18:03:25.744633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:09.247 [2024-11-26 18:03:25.744650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:09.247 [2024-11-26 18:03:25.744661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.247 [2024-11-26 18:03:25.745727] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3920.915 ms, result 0 00:17:09.247 { 00:17:09.247 "name": 
"ftl0", 00:17:09.247 "uuid": "6a8373d9-959b-4681-90e1-ca6922fae046" 00:17:09.247 } 00:17:09.247 18:03:25 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:09.247 18:03:25 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:09.247 18:03:25 -- ftl/restore.sh@63 -- # echo ']}' 00:17:09.247 18:03:25 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:09.247 [2024-11-26 18:03:26.169936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.247 [2024-11-26 18:03:26.170186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:09.247 [2024-11-26 18:03:26.170395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:09.247 [2024-11-26 18:03:26.170441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.247 [2024-11-26 18:03:26.170538] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:09.507 [2024-11-26 18:03:26.171354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.507 [2024-11-26 18:03:26.171467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:09.507 [2024-11-26 18:03:26.171556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:17:09.507 [2024-11-26 18:03:26.171591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.507 [2024-11-26 18:03:26.171855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.507 [2024-11-26 18:03:26.171902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:09.507 [2024-11-26 18:03:26.171939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:17:09.507 [2024-11-26 18:03:26.172013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.507 [2024-11-26 18:03:26.174616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.507 [2024-11-26 18:03:26.174726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:09.507 [2024-11-26 18:03:26.174797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.501 ms 00:17:09.507 [2024-11-26 18:03:26.174835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.507 [2024-11-26 18:03:26.179848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.507 [2024-11-26 18:03:26.179976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:09.507 [2024-11-26 18:03:26.180050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.971 ms 00:17:09.507 [2024-11-26 18:03:26.180092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.507 [2024-11-26 18:03:26.181856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.507 [2024-11-26 18:03:26.181977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:09.507 [2024-11-26 18:03:26.182050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.651 ms 00:17:09.507 [2024-11-26 18:03:26.182084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.507 [2024-11-26 18:03:26.186919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.507 [2024-11-26 18:03:26.187071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:09.507 
[2024-11-26 18:03:26.187153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.749 ms 00:17:09.507 [2024-11-26 18:03:26.187192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.507 [2024-11-26 18:03:26.187345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.507 [2024-11-26 18:03:26.187495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:09.507 [2024-11-26 18:03:26.187517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:17:09.508 [2024-11-26 18:03:26.187541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.508 [2024-11-26 18:03:26.189075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.508 [2024-11-26 18:03:26.189105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:09.508 [2024-11-26 18:03:26.189120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.508 ms 00:17:09.508 [2024-11-26 18:03:26.189131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.508 [2024-11-26 18:03:26.190689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.508 [2024-11-26 18:03:26.190723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:09.508 [2024-11-26 18:03:26.190738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.522 ms 00:17:09.508 [2024-11-26 18:03:26.190748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.508 [2024-11-26 18:03:26.192010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.508 [2024-11-26 18:03:26.192044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:09.508 [2024-11-26 18:03:26.192059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.224 ms 00:17:09.508 [2024-11-26 18:03:26.192069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.508 [2024-11-26 18:03:26.193204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.508 [2024-11-26 18:03:26.193240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:09.508 [2024-11-26 18:03:26.193255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.068 ms 00:17:09.508 [2024-11-26 18:03:26.193265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.508 [2024-11-26 18:03:26.193311] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:09.508 [2024-11-26 18:03:26.193329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 
261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:09.508 [2024-11-26 18:03:26.193765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:17:09.508 [2024-11-26 18:03:26.193777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:17:09.508 [2024-11-26 18:03:26.193791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:17:09.508 [2024-11-26 18:03:26.193803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:17:09.508 [2024-11-26 18:03:26.193817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:17:09.508 [2024-11-26 18:03:26.193828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:17:09.508 [2024-11-26 18:03:26.193842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:17:09.508 [2024-11-26 18:03:26.193854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:17:09.508 [2024-11-26 18:03:26.193868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:17:09.508 [2024-11-26 18:03:26.193879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:17:09.508 [2024-11-26 18:03:26.193893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:17:09.508 [2024-11-26 18:03:26.193904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:17:09.508 [2024-11-26 18:03:26.193921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:17:09.508 [2024-11-26 18:03:26.193933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:17:09.508 [2024-11-26 18:03:26.193947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:17:09.508 [2024-11-26 18:03:26.193958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:17:09.508 [2024-11-26 18:03:26.193972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:17:09.508 [2024-11-26 18:03:26.193983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:17:09.508 [2024-11-26 18:03:26.193997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:17:09.508 [2024-11-26 18:03:26.194009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:17:09.508 [2024-11-26 18:03:26.194022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:17:09.508 [2024-11-26 18:03:26.194033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:17:09.508 [2024-11-26 18:03:26.194047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:17:09.508 [2024-11-26 18:03:26.194058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.194976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.195031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.195085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.195138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.195231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.195285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.195351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.195401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.195503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.195560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.195614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.195668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.195722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:17:09.509 [2024-11-26 18:03:26.195827] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:09.509 [2024-11-26 18:03:26.195869] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6a8373d9-959b-4681-90e1-ca6922fae046
00:17:09.509 [2024-11-26 18:03:26.195919] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:17:09.509 [2024-11-26 18:03:26.195953] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:17:09.509 [2024-11-26 18:03:26.195984] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:17:09.509 [2024-11-26 18:03:26.196058] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:17:09.509 [2024-11-26 18:03:26.196094] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:17:09.509 [2024-11-26 18:03:26.196144] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:17:09.509 [2024-11-26 18:03:26.196175] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:17:09.509 [2024-11-26 18:03:26.196208] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:17:09.509 [2024-11-26 18:03:26.196238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:17:09.509 [2024-11-26 18:03:26.196273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:09.509 [2024-11-26 18:03:26.196349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:17:09.509 [2024-11-26 18:03:26.196390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.981 ms
00:17:09.509 [2024-11-26 18:03:26.196402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.509 [2024-11-26 18:03:26.198354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:09.509 [2024-11-26 18:03:26.198379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:17:09.509 [2024-11-26 18:03:26.198395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.918 ms
00:17:09.509 [2024-11-26 18:03:26.198414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.509 [2024-11-26 18:03:26.198542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:09.509 [2024-11-26 18:03:26.198557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:17:09.509 [2024-11-26 18:03:26.198576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms
00:17:09.509 [2024-11-26 18:03:26.198586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.509 [2024-11-26 18:03:26.205875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.509 [2024-11-26 18:03:26.206049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:09.509 [2024-11-26 18:03:26.206140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.509 [2024-11-26 18:03:26.206190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.509 [2024-11-26 18:03:26.206280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.509 [2024-11-26 18:03:26.206313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:09.509 [2024-11-26 18:03:26.206350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.509 [2024-11-26 18:03:26.206380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.509 [2024-11-26 18:03:26.206553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.509 [2024-11-26 18:03:26.206599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:09.509 [2024-11-26 18:03:26.206644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.510 [2024-11-26 18:03:26.206674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.510 [2024-11-26 18:03:26.206747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.510 [2024-11-26 18:03:26.206832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:09.510 [2024-11-26 18:03:26.206873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.510 [2024-11-26 18:03:26.206905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.510 [2024-11-26 18:03:26.221592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.510 [2024-11-26 18:03:26.221848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:09.510 [2024-11-26 18:03:26.221968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.510 [2024-11-26 18:03:26.222008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.510 [2024-11-26 18:03:26.227281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.510 [2024-11-26 18:03:26.227438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:09.510 [2024-11-26 18:03:26.227576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.510 [2024-11-26 18:03:26.227597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.510 [2024-11-26 18:03:26.227691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.510 [2024-11-26 18:03:26.227704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:09.510 [2024-11-26 18:03:26.227717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.510 [2024-11-26 18:03:26.227727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.510 [2024-11-26 18:03:26.227768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.510 [2024-11-26 18:03:26.227780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:09.510 [2024-11-26 18:03:26.227793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.510 [2024-11-26 18:03:26.227803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.510 [2024-11-26 18:03:26.227897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.510 [2024-11-26 18:03:26.227910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:09.510 [2024-11-26 18:03:26.227923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.510 [2024-11-26 18:03:26.227941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.510 [2024-11-26 18:03:26.227982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.510 [2024-11-26 18:03:26.227995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:09.510 [2024-11-26 18:03:26.228008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.510 [2024-11-26 18:03:26.228018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.510 [2024-11-26 18:03:26.228063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.510 [2024-11-26 18:03:26.228074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:09.510 [2024-11-26 18:03:26.228087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.510 [2024-11-26 18:03:26.228097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.510 [2024-11-26 18:03:26.228147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.510 [2024-11-26 18:03:26.228158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:09.510 [2024-11-26 18:03:26.228171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.510 [2024-11-26 18:03:26.228181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.510 [2024-11-26 18:03:26.228326] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 58.437 ms, result 0
00:17:09.510 true
00:17:09.510 18:03:26 -- ftl/restore.sh@66 -- # killprocess 84090
00:17:09.510 18:03:26 -- common/autotest_common.sh@936 -- # '[' -z 84090 ']'
00:17:09.510 18:03:26 -- common/autotest_common.sh@940 -- # kill -0 84090
00:17:09.510 18:03:26 -- common/autotest_common.sh@941 -- # uname
00:17:09.510 18:03:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:09.510 18:03:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84090
00:17:09.510 killing process with pid 84090
18:03:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:17:09.510 18:03:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:17:09.510 18:03:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84090'
00:17:09.510 18:03:26 -- common/autotest_common.sh@955 -- # kill 84090
00:17:09.510 18:03:26 -- common/autotest_common.sh@960 -- # wait 84090
00:17:12.800 18:03:29 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:17:16.992 262144+0 records in
00:17:16.992 262144+0 records out
00:17:16.992 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.096 s, 262 MB/s
00:17:16.992 18:03:33 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:17:18.370 18:03:34 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:17:18.370 [2024-11-26 18:03:35.054733] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:17:18.370 [2024-11-26 18:03:35.054917] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84310 ]
00:17:18.370 [2024-11-26 18:03:35.207191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:18.370 [2024-11-26 18:03:35.259157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:18.629 [2024-11-26 18:03:35.365801] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-26 18:03:35.365919] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:17:18.629 [2024-11-26 18:03:35.517986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.629 [2024-11-26 18:03:35.518060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:17:18.629 [2024-11-26 18:03:35.518076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:17:18.629 [2024-11-26 18:03:35.518086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.629 [2024-11-26 18:03:35.518175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.629 [2024-11-26 18:03:35.518206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:18.629 [2024-11-26 18:03:35.518217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms
00:17:18.629 [2024-11-26 18:03:35.518228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.629 [2024-11-26 18:03:35.518270] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:17:18.629 [2024-11-26 18:03:35.518613] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:17:18.629 [2024-11-26 18:03:35.518637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.629 [2024-11-26 18:03:35.518651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:18.629 [2024-11-26 18:03:35.518663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms
00:17:18.629 [2024-11-26 18:03:35.518673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.629 [2024-11-26 18:03:35.520388] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:17:18.629 [2024-11-26 18:03:35.523173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.629 [2024-11-26 18:03:35.523228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:17:18.629 [2024-11-26 18:03:35.523255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.797 ms
00:17:18.629 [2024-11-26 18:03:35.523270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.629 [2024-11-26 18:03:35.523361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.629 [2024-11-26 18:03:35.523374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:17:18.629 [2024-11-26 18:03:35.523385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms
00:17:18.629 [2024-11-26 18:03:35.523395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.629 [2024-11-26 18:03:35.530801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.629 [2024-11-26 18:03:35.530848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:18.629 [2024-11-26 18:03:35.530861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.358 ms
00:17:18.629 [2024-11-26 18:03:35.530871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.629 [2024-11-26 18:03:35.530967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.629 [2024-11-26 18:03:35.530983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:18.630 [2024-11-26 18:03:35.530994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms
00:17:18.630 [2024-11-26 18:03:35.531004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.630 [2024-11-26 18:03:35.531076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.630 [2024-11-26 18:03:35.531096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:17:18.630 [2024-11-26 18:03:35.531107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:17:18.630 [2024-11-26 18:03:35.531120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.630 [2024-11-26 18:03:35.531152] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:18.630 [2024-11-26 18:03:35.533029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.630 [2024-11-26 18:03:35.533069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:18.630 [2024-11-26 18:03:35.533081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.887 ms
00:17:18.630 [2024-11-26 18:03:35.533090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.630 [2024-11-26 18:03:35.533133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.630 [2024-11-26 18:03:35.533144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:17:18.630 [2024-11-26 18:03:35.533155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:17:18.630 [2024-11-26 18:03:35.533168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.630 [2024-11-26 18:03:35.533194] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:17:18.630 [2024-11-26 18:03:35.533216] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes
00:17:18.630 [2024-11-26 18:03:35.533251] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:17:18.630 [2024-11-26 18:03:35.533269] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes
00:17:18.630 [2024-11-26 18:03:35.533353] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes
00:17:18.630 [2024-11-26 18:03:35.533367] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:17:18.630 [2024-11-26 18:03:35.533384] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes
00:17:18.630 [2024-11-26 18:03:35.533402] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:17:18.630 [2024-11-26 18:03:35.533418] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:17:18.630 [2024-11-26 18:03:35.533430] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:17:18.630 [2024-11-26 18:03:35.533441] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:17:18.630 [2024-11-26 18:03:35.533459] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024
00:17:18.630 [2024-11-26 18:03:35.533488] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4
00:17:18.630 [2024-11-26 18:03:35.533507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.630 [2024-11-26 18:03:35.533526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:17:18.630 [2024-11-26 18:03:35.533537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms
00:17:18.630 [2024-11-26 18:03:35.533560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.630 [2024-11-26 18:03:35.533619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.630 [2024-11-26 18:03:35.533632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:17:18.630 [2024-11-26 18:03:35.533643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms
00:17:18.630 [2024-11-26 18:03:35.533653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.630 [2024-11-26 18:03:35.533724] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:17:18.630 [2024-11-26 18:03:35.533736] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:17:18.630 [2024-11-26 18:03:35.533747] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:17:18.630 [2024-11-26 18:03:35.533761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:18.630 [2024-11-26 18:03:35.533779] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:17:18.630 [2024-11-26 18:03:35.533789] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:17:18.630 [2024-11-26 18:03:35.533798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:17:18.630 [2024-11-26 18:03:35.533808] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:17:18.630 [2024-11-26 18:03:35.533818] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:17:18.630 [2024-11-26 18:03:35.533827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:17:18.630 [2024-11-26 18:03:35.533837] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:17:18.630 [2024-11-26 18:03:35.533848] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:17:18.630 [2024-11-26 18:03:35.533858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:17:18.630 [2024-11-26 18:03:35.533868] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:17:18.630 [2024-11-26 18:03:35.533877] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB
00:17:18.630 [2024-11-26 18:03:35.533887] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:18.630 [2024-11-26 18:03:35.533896] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:17:18.630 [2024-11-26 18:03:35.533905] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB
00:17:18.630 [2024-11-26 18:03:35.533915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:18.630 [2024-11-26 18:03:35.533928] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc
00:17:18.630 [2024-11-26 18:03:35.533938] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB
00:17:18.630 [2024-11-26 18:03:35.533947] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB
00:17:18.630 [2024-11-26 18:03:35.533957] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:17:18.630 [2024-11-26 18:03:35.533967] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:17:18.630 [2024-11-26 18:03:35.533976] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:17:18.630 [2024-11-26 18:03:35.533986] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:17:18.630 [2024-11-26 18:03:35.533995] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB
00:17:18.630 [2024-11-26 18:03:35.534004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:17:18.630 [2024-11-26 18:03:35.534014] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:17:18.630 [2024-11-26 18:03:35.534023] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:17:18.630 [2024-11-26 18:03:35.534032] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:17:18.630 [2024-11-26 18:03:35.534042] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:17:18.630 [2024-11-26 18:03:35.534051] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB
00:17:18.630 [2024-11-26 18:03:35.534060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:17:18.630 [2024-11-26 18:03:35.534069] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:17:18.630 [2024-11-26 18:03:35.534085] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:17:18.630 [2024-11-26 18:03:35.534095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:17:18.630 [2024-11-26 18:03:35.534104] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:17:18.630 [2024-11-26 18:03:35.534113] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB
00:17:18.630 [2024-11-26 18:03:35.534122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:17:18.630 [2024-11-26 18:03:35.534131] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:17:18.630 [2024-11-26 18:03:35.534145] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:17:18.630 [2024-11-26 18:03:35.534176] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:17:18.630 [2024-11-26 18:03:35.534186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:18.630 [2024-11-26 18:03:35.534197] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:17:18.630 [2024-11-26 18:03:35.534207] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:17:18.630 [2024-11-26 18:03:35.534217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:17:18.630 [2024-11-26 18:03:35.534226] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:17:18.630 [2024-11-26 18:03:35.534235] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:17:18.630 [2024-11-26 18:03:35.534245] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:17:18.630 [2024-11-26 18:03:35.534256] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:17:18.630 [2024-11-26 18:03:35.534272] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:17:18.630 [2024-11-26 18:03:35.534284] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:17:18.630 [2024-11-26 18:03:35.534295] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80
00:17:18.630 [2024-11-26 18:03:35.534306] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80
00:17:18.630 [2024-11-26 18:03:35.534316] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400
00:17:18.630 [2024-11-26 18:03:35.534327] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400
00:17:18.630 [2024-11-26 18:03:35.534338] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400
00:17:18.630 [2024-11-26 18:03:35.534348] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400
00:17:18.630 [2024-11-26 18:03:35.534359] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40
00:17:18.630 [2024-11-26 18:03:35.534369] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40
00:17:18.630 [2024-11-26 18:03:35.534379] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20
00:17:18.630 [2024-11-26 18:03:35.534390] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20
00:17:18.630 [2024-11-26 18:03:35.534400] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000
00:17:18.630 [2024-11-26 18:03:35.534412] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120
00:17:18.630 [2024-11-26 18:03:35.534422] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:17:18.630 [2024-11-26 18:03:35.534434] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:17:18.630 [2024-11-26 18:03:35.534448] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:17:18.630 [2024-11-26 18:03:35.534508] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:17:18.630 [2024-11-26 18:03:35.534519] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:17:18.630 [2024-11-26 18:03:35.534531] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:17:18.631 [2024-11-26 18:03:35.534542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.631 [2024-11-26 18:03:35.534552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:17:18.631 [2024-11-26 18:03:35.534563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.856 ms
00:17:18.631 [2024-11-26 18:03:35.534587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.631 [2024-11-26 18:03:35.543766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.631 [2024-11-26 18:03:35.543823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:18.631 [2024-11-26 18:03:35.543837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.143 ms
00:17:18.631 [2024-11-26 18:03:35.543849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.631 [2024-11-26 18:03:35.543947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.631 [2024-11-26 18:03:35.543959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:17:18.631 [2024-11-26 18:03:35.543971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms
00:17:18.631 [2024-11-26 18:03:35.543980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.890 [2024-11-26 18:03:35.562734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.890 [2024-11-26 18:03:35.562801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:18.890 [2024-11-26 18:03:35.562821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.704 ms
00:17:18.890 [2024-11-26 18:03:35.562835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.890 [2024-11-26 18:03:35.562900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.890 [2024-11-26 18:03:35.562914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:18.890 [2024-11-26 18:03:35.562934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:17:18.890 [2024-11-26 18:03:35.562952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.890 [2024-11-26 18:03:35.563533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.890 [2024-11-26 18:03:35.563559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:18.890 [2024-11-26 18:03:35.563574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms
00:17:18.890 [2024-11-26 18:03:35.563587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.890 [2024-11-26 18:03:35.563739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.890 [2024-11-26 18:03:35.563758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:18.890 [2024-11-26 18:03:35.563772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms
00:17:18.890 [2024-11-26 18:03:35.563784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.890 [2024-11-26 18:03:35.572120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.890 [2024-11-26 18:03:35.572186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:18.890 [2024-11-26 18:03:35.572201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.313 ms
00:17:18.890 [2024-11-26 18:03:35.572212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.890 [2024-11-26 18:03:35.575279] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
00:17:18.890 [2024-11-26 18:03:35.575332] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:17:18.890 [2024-11-26 18:03:35.575347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.890 [2024-11-26 18:03:35.575357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:17:18.890 [2024-11-26 18:03:35.575370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.000 ms
00:17:18.890 [2024-11-26 18:03:35.575381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.890 [2024-11-26 18:03:35.588648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.890 [2024-11-26 18:03:35.588737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:17:18.890 [2024-11-26 18:03:35.588753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.234 ms
00:17:18.890 [2024-11-26 18:03:35.588763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.890 [2024-11-26 18:03:35.591689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.890 [2024-11-26 18:03:35.591734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:17:18.890 [2024-11-26 18:03:35.591765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.865 ms
00:17:18.890 [2024-11-26 18:03:35.591775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.890 [2024-11-26 18:03:35.593266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.890 [2024-11-26 18:03:35.593302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:17:18.890 [2024-11-26 18:03:35.593321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.439 ms
00:17:18.890 [2024-11-26 18:03:35.593332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.890 [2024-11-26 18:03:35.593577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.890 [2024-11-26 18:03:35.593602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:17:18.890 [2024-11-26 18:03:35.593614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms
00:17:18.890 [2024-11-26 18:03:35.593625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.890 [2024-11-26 18:03:35.620295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.890 [2024-11-26 18:03:35.620380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:17:18.890 [2024-11-26 18:03:35.620403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.692 ms
00:17:18.890 [2024-11-26 18:03:35.620414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.890 [2024-11-26 18:03:35.628525] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:17:18.890 [2024-11-26 18:03:35.632318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.890 [2024-11-26 18:03:35.632385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:17:18.890 [2024-11-26 18:03:35.632400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.831 ms
00:17:18.890 [2024-11-26 18:03:35.632420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.890 [2024-11-26 18:03:35.632570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.890 [2024-11-26 18:03:35.632585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:17:18.890 [2024-11-26 18:03:35.632597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:17:18.890 [2024-11-26 18:03:35.632608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.890 [2024-11-26 18:03:35.632701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.890 [2024-11-26 18:03:35.632716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:17:18.890 [2024-11-26 18:03:35.632732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms
00:17:18.890 [2024-11-26 18:03:35.632750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.890 [2024-11-26 18:03:35.635441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.890 [2024-11-26 18:03:35.635511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs
00:17:18.890 [2024-11-26 18:03:35.635525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.667 ms
00:17:18.890 [2024-11-26 18:03:35.635545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.890 [2024-11-26 18:03:35.635596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.890 [2024-11-26 18:03:35.635608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:17:18.890 [2024-11-26 18:03:35.635620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:17:18.891 [2024-11-26 18:03:35.635630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.891 [2024-11-26 18:03:35.635677] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:17:18.891 [2024-11-26 18:03:35.635690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.891 [2024-11-26 18:03:35.635715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:17:18.891 [2024-11-26 18:03:35.635734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:17:18.891 [2024-11-26 18:03:35.635752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.891 [2024-11-26 18:03:35.639728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.891 [2024-11-26 18:03:35.639777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:17:18.891 [2024-11-26 18:03:35.639792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.960 ms
00:17:18.891 [2024-11-26 18:03:35.639803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.891 [2024-11-26 18:03:35.639885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:18.891 [2024-11-26 18:03:35.639899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:17:18.891 [2024-11-26 18:03:35.639910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms
00:17:18.891 [2024-11-26 18:03:35.639926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:18.891 [2024-11-26 18:03:35.641148] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 122.912 ms, result 0
00:17:19.827  [2024-11-26T18:03:37.691Z] Copying: 27/1024 [MB] (27 MBps)
[2024-11-26T18:03:39.067Z] Copying: 58/1024 [MB] (30 MBps)
[2024-11-26T18:03:39.649Z] Copying: 86/1024 [MB] (28 MBps)
[2024-11-26T18:03:41.023Z] Copying: 114/1024 [MB] (28 MBps)
[2024-11-26T18:03:41.957Z] Copying: 144/1024 [MB] (29 MBps)
[2024-11-26T18:03:42.892Z] Copying: 172/1024 [MB] (28 MBps)
[2024-11-26T18:03:43.874Z] Copying: 202/1024 [MB] (29 MBps)
[2024-11-26T18:03:44.932Z] Copying: 230/1024 [MB] (28 MBps)
[2024-11-26T18:03:45.862Z] Copying: 258/1024 [MB] (27 MBps)
[2024-11-26T18:03:46.796Z] Copying: 287/1024 [MB] (28 MBps)
[2024-11-26T18:03:47.728Z] Copying: 316/1024 [MB] (28 MBps)
[2024-11-26T18:03:48.659Z] Copying: 346/1024 [MB] (30 MBps)
[2024-11-26T18:03:50.035Z] Copying: 376/1024 [MB] (30 MBps)
[2024-11-26T18:03:50.973Z] Copying: 409/1024 [MB] (33 MBps)
[2024-11-26T18:03:51.912Z] Copying: 441/1024 [MB] (31 MBps)
[2024-11-26T18:03:52.888Z] Copying: 471/1024 [MB] (29 MBps)
[2024-11-26T18:03:53.826Z] Copying: 500/1024 [MB] (28 MBps)
[2024-11-26T18:03:54.761Z] Copying: 529/1024 [MB] (29 MBps)
[2024-11-26T18:03:55.700Z] Copying: 557/1024 [MB] (28 MBps)
[2024-11-26T18:03:56.685Z] Copying: 583/1024 [MB] (26 MBps)
[2024-11-26T18:03:57.622Z] Copying: 608/1024 [MB] (25 MBps)
[2024-11-26T18:03:58.996Z] Copying: 635/1024 [MB] (26 MBps)
[2024-11-26T18:03:59.933Z] Copying: 661/1024 [MB] (25 MBps)
[2024-11-26T18:04:00.870Z] Copying: 687/1024 [MB] (26 MBps)
[2024-11-26T18:04:01.876Z] Copying: 713/1024 [MB] (26 MBps)
[2024-11-26T18:04:02.810Z] Copying: 740/1024 [MB] (27 MBps)
[2024-11-26T18:04:03.747Z] Copying: 767/1024 [MB] (27 MBps)
[2024-11-26T18:04:04.683Z] Copying: 794/1024 [MB] (26 MBps)
[2024-11-26T18:04:05.622Z] Copying: 822/1024 [MB] (28 MBps)
[2024-11-26T18:04:07.040Z] Copying: 850/1024 [MB] (27 MBps)
[2024-11-26T18:04:07.609Z] Copying: 878/1024 [MB] (28 MBps)
[2024-11-26T18:04:08.990Z] Copying: 907/1024 [MB] (28 MBps)
[2024-11-26T18:04:09.928Z] Copying: 934/1024 [MB] (27 MBps)
[2024-11-26T18:04:10.903Z] Copying: 962/1024 [MB] (27 MBps)
[2024-11-26T18:04:11.840Z] Copying: 992/1024 [MB] (29 MBps)
[2024-11-26T18:04:11.840Z] Copying: 1024/1024 [MB] (average 28 MBps)
[2024-11-26 18:04:11.581300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.914 [2024-11-26 18:04:11.581372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:17:54.914 [2024-11-26 18:04:11.581399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:17:54.914 [2024-11-26 18:04:11.581411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.914 [2024-11-26 18:04:11.581447] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:54.914 [2024-11-26 18:04:11.582232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.914 [2024-11-26 18:04:11.582264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:17:54.914 [2024-11-26 18:04:11.582277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms
00:17:54.914 [2024-11-26 18:04:11.582288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.914 [2024-11-26 18:04:11.584009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.914 [2024-11-26 18:04:11.584052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:17:54.914 [2024-11-26 18:04:11.584066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.699 ms
00:17:54.915 [2024-11-26 18:04:11.584078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.915 [2024-11-26 18:04:11.601331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.915 [2024-11-26 18:04:11.601401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:17:54.915 [2024-11-26 18:04:11.601416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.235 ms
00:17:54.915 [2024-11-26 18:04:11.601439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.915 [2024-11-26 18:04:11.606694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.915 [2024-11-26 18:04:11.606744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:17:54.915 [2024-11-26 18:04:11.606758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.212 ms
00:17:54.915 [2024-11-26 18:04:11.606770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.915 [2024-11-26 18:04:11.608742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.915 [2024-11-26 18:04:11.608786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:17:54.915 [2024-11-26 18:04:11.608799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.899 ms
00:17:54.915 [2024-11-26 18:04:11.608810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.915 [2024-11-26 18:04:11.612747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.915 [2024-11-26 18:04:11.612792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:17:54.915 [2024-11-26 18:04:11.612816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.913 ms
00:17:54.915 [2024-11-26 18:04:11.612836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.915 [2024-11-26 18:04:11.612946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.915 [2024-11-26 18:04:11.612959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:17:54.915 [2024-11-26 18:04:11.612970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms
00:17:54.915 [2024-11-26 18:04:11.612980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.915 [2024-11-26 18:04:11.615090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.915 [2024-11-26 18:04:11.615134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:17:54.915 [2024-11-26 18:04:11.615163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.095 ms
00:17:54.915 [2024-11-26 18:04:11.615174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.915 [2024-11-26 18:04:11.616661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.915 [2024-11-26 18:04:11.616694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:17:54.915 [2024-11-26 18:04:11.616707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.458 ms
00:17:54.915 [2024-11-26 18:04:11.616734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.915 [2024-11-26 18:04:11.617850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.915 [2024-11-26 18:04:11.617884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:17:54.915 [2024-11-26 18:04:11.617895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms
00:17:54.915 [2024-11-26 18:04:11.617905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.915 [2024-11-26 18:04:11.618964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.915 [2024-11-26 18:04:11.619001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:17:54.915 [2024-11-26 18:04:11.619012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms
00:17:54.915 [2024-11-26 18:04:11.619022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.915 [2024-11-26 18:04:11.619062] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:17:54.915 [2024-11-26 18:04:11.619101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:17:54.915 [2024-11-26 18:04:11.619774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.619785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.619796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.619807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.619818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.619830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.619842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.619853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.619864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.619875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.619886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.619898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.619909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.619920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.619931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.619942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.619953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.619964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.619975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.619986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.619997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.620008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.620019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.620030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.620041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.620052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.620063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.620074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.620085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.620096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.620107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.620118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.620129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.620140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.620151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.620162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:17:54.916 [2024-11-26 18:04:11.620173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*:
[FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:54.916 [2024-11-26 18:04:11.620184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:54.916 [2024-11-26 18:04:11.620196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:54.916 [2024-11-26 18:04:11.620207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:54.916 [2024-11-26 18:04:11.620218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:54.916 [2024-11-26 18:04:11.620229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:54.916 [2024-11-26 18:04:11.620239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:54.916 [2024-11-26 18:04:11.620270] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:54.916 [2024-11-26 18:04:11.620284] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6a8373d9-959b-4681-90e1-ca6922fae046 00:17:54.916 [2024-11-26 18:04:11.620304] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:54.916 [2024-11-26 18:04:11.620323] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:54.916 [2024-11-26 18:04:11.620333] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:54.916 [2024-11-26 18:04:11.620344] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:54.916 [2024-11-26 18:04:11.620355] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:54.916 [2024-11-26 18:04:11.620365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:54.916 [2024-11-26 18:04:11.620375] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:54.916 [2024-11-26 18:04:11.620385] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:54.916 [2024-11-26 18:04:11.620394] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:54.916 [2024-11-26 18:04:11.620405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.916 [2024-11-26 18:04:11.620415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:54.916 [2024-11-26 18:04:11.620427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.361 ms 00:17:54.916 [2024-11-26 18:04:11.620436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.916 [2024-11-26 18:04:11.622447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.916 [2024-11-26 18:04:11.622487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:54.916 [2024-11-26 18:04:11.622500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.957 ms 00:17:54.916 [2024-11-26 18:04:11.622511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.916 [2024-11-26 18:04:11.622595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.916 [2024-11-26 18:04:11.622608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:54.916 [2024-11-26 18:04:11.622620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:54.916 [2024-11-26 18:04:11.622634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.916 [2024-11-26 
18:04:11.629794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.916 [2024-11-26 18:04:11.629842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:54.916 [2024-11-26 18:04:11.629855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.916 [2024-11-26 18:04:11.629867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.916 [2024-11-26 18:04:11.629918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.916 [2024-11-26 18:04:11.629939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:54.916 [2024-11-26 18:04:11.629950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.916 [2024-11-26 18:04:11.629972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.916 [2024-11-26 18:04:11.630066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.916 [2024-11-26 18:04:11.630088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:54.916 [2024-11-26 18:04:11.630099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.916 [2024-11-26 18:04:11.630109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.916 [2024-11-26 18:04:11.630128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.916 [2024-11-26 18:04:11.630140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:54.916 [2024-11-26 18:04:11.630150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.916 [2024-11-26 18:04:11.630183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.916 [2024-11-26 18:04:11.644120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.916 [2024-11-26 18:04:11.644183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:54.916 [2024-11-26 18:04:11.644198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.916 [2024-11-26 18:04:11.644210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.916 [2024-11-26 18:04:11.649883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.916 [2024-11-26 18:04:11.649924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:54.916 [2024-11-26 18:04:11.649954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.916 [2024-11-26 18:04:11.649964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.916 [2024-11-26 18:04:11.650042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.916 [2024-11-26 18:04:11.650055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:54.916 [2024-11-26 18:04:11.650067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.916 [2024-11-26 18:04:11.650077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.916 [2024-11-26 18:04:11.650110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.916 [2024-11-26 18:04:11.650122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:54.916 [2024-11-26 18:04:11.650133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.916 [2024-11-26 18:04:11.650142] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.916 [2024-11-26 18:04:11.650244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.916 [2024-11-26 18:04:11.650263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:54.917 [2024-11-26 18:04:11.650274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.917 [2024-11-26 18:04:11.650284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.917 [2024-11-26 18:04:11.650320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.917 [2024-11-26 18:04:11.650333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:54.917 [2024-11-26 18:04:11.650343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.917 [2024-11-26 18:04:11.650353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.917 [2024-11-26 18:04:11.650408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.917 [2024-11-26 18:04:11.650424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:54.917 [2024-11-26 18:04:11.650435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.917 [2024-11-26 18:04:11.650445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.917 [2024-11-26 18:04:11.650504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.917 [2024-11-26 18:04:11.650517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:54.917 [2024-11-26 18:04:11.650537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.917 [2024-11-26 18:04:11.650547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.917 [2024-11-26 18:04:11.650703] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.455 ms, result 0 00:17:55.484 00:17:55.484 00:17:55.484 18:04:12 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:17:55.484 [2024-11-26 18:04:12.339301] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
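
Note on the step above: the first FTL instance shut down with result 0, and ftl/restore.sh (line 74) now reopens the same device by running spdk_dd against the saved bdev configuration. The --count=262144 argument is consistent with the 1024 MiB that the "Copying: x/1024 [MB]" progress further below counts up to, assuming FTL's 4 KiB block size (262144 x 4096 bytes = 1024 MiB). The statistics dump above reads "WAF: inf" presumably because write amplification is total writes divided by user writes, and here user writes is 0 while the 960 total writes are all metadata. A minimal sketch of the same read-back pattern, using only the spdk_dd flags that appear in this log; the /tmp paths, file names, and the cmp check are illustrative, not taken from this run:

  # Read 262144 blocks out of FTL bdev ftl0 into a regular file,
  # using the bdev configuration (ftl.json) saved before shutdown.
  ./build/bin/spdk_dd --ib=ftl0 --of=/tmp/restored.bin \
      --json=/tmp/ftl.json --count=262144
  # Illustrative check: compare against the data originally written
  # into ftl0; identical contents mean the restore preserved it.
  cmp /tmp/original.bin /tmp/restored.bin

The EAL parameter dump and "FTL startup" sequence that follow show the new spdk_dd process bringing the device back up and restoring its metadata (superblock, NV cache, valid map, band info, trim, L2P) before the copy begins.
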
00:17:55.484 [2024-11-26 18:04:12.339445] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84695 ] 00:17:55.743 [2024-11-26 18:04:12.489143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.743 [2024-11-26 18:04:12.537323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.743 [2024-11-26 18:04:12.639915] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:55.743 [2024-11-26 18:04:12.640008] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:56.003 [2024-11-26 18:04:12.791717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.003 [2024-11-26 18:04:12.791785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:56.003 [2024-11-26 18:04:12.791808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:56.003 [2024-11-26 18:04:12.791819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.003 [2024-11-26 18:04:12.791894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.003 [2024-11-26 18:04:12.791908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:56.003 [2024-11-26 18:04:12.791918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:56.003 [2024-11-26 18:04:12.791935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.003 [2024-11-26 18:04:12.791961] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:56.003 [2024-11-26 18:04:12.792243] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:56.003 [2024-11-26 18:04:12.792264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.003 [2024-11-26 18:04:12.792278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:56.003 [2024-11-26 18:04:12.792289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:17:56.003 [2024-11-26 18:04:12.792299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.003 [2024-11-26 18:04:12.793846] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:56.003 [2024-11-26 18:04:12.796599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.003 [2024-11-26 18:04:12.796638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:56.003 [2024-11-26 18:04:12.796651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.762 ms 00:17:56.003 [2024-11-26 18:04:12.796667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.003 [2024-11-26 18:04:12.796725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.003 [2024-11-26 18:04:12.796738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:56.003 [2024-11-26 18:04:12.796749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:56.003 [2024-11-26 18:04:12.796767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.003 [2024-11-26 18:04:12.803610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.003 [2024-11-26 
18:04:12.803647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:56.003 [2024-11-26 18:04:12.803659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.809 ms 00:17:56.003 [2024-11-26 18:04:12.803669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.003 [2024-11-26 18:04:12.803753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.003 [2024-11-26 18:04:12.803767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:56.003 [2024-11-26 18:04:12.803784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:56.003 [2024-11-26 18:04:12.803794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.003 [2024-11-26 18:04:12.803857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.003 [2024-11-26 18:04:12.803868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:56.003 [2024-11-26 18:04:12.803879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:56.003 [2024-11-26 18:04:12.803899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.003 [2024-11-26 18:04:12.803934] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:56.003 [2024-11-26 18:04:12.805619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.003 [2024-11-26 18:04:12.805648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:56.003 [2024-11-26 18:04:12.805660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.694 ms 00:17:56.003 [2024-11-26 18:04:12.805670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.003 [2024-11-26 18:04:12.805702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.003 [2024-11-26 18:04:12.805713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:56.003 [2024-11-26 18:04:12.805724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:56.003 [2024-11-26 18:04:12.805743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.003 [2024-11-26 18:04:12.805765] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:56.003 [2024-11-26 18:04:12.805788] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:56.004 [2024-11-26 18:04:12.805827] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:56.004 [2024-11-26 18:04:12.805847] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:56.004 [2024-11-26 18:04:12.805914] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:56.004 [2024-11-26 18:04:12.805926] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:56.004 [2024-11-26 18:04:12.805948] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:56.004 [2024-11-26 18:04:12.805964] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:56.004 [2024-11-26 18:04:12.805979] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:56.004 [2024-11-26 18:04:12.805990] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:56.004 [2024-11-26 18:04:12.806000] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:56.004 [2024-11-26 18:04:12.806010] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:56.004 [2024-11-26 18:04:12.806019] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:56.004 [2024-11-26 18:04:12.806030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.004 [2024-11-26 18:04:12.806040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:56.004 [2024-11-26 18:04:12.806050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:17:56.004 [2024-11-26 18:04:12.806062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.004 [2024-11-26 18:04:12.806119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.004 [2024-11-26 18:04:12.806130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:56.004 [2024-11-26 18:04:12.806140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:56.004 [2024-11-26 18:04:12.806150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.004 [2024-11-26 18:04:12.806225] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:56.004 [2024-11-26 18:04:12.806237] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:56.004 [2024-11-26 18:04:12.806249] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:56.004 [2024-11-26 18:04:12.806262] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.004 [2024-11-26 18:04:12.806273] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:56.004 [2024-11-26 18:04:12.806282] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:56.004 [2024-11-26 18:04:12.806291] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:56.004 [2024-11-26 18:04:12.806300] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:56.004 [2024-11-26 18:04:12.806312] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:56.004 [2024-11-26 18:04:12.806321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:56.004 [2024-11-26 18:04:12.806330] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:56.004 [2024-11-26 18:04:12.806340] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:56.004 [2024-11-26 18:04:12.806348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:56.004 [2024-11-26 18:04:12.806358] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:56.004 [2024-11-26 18:04:12.806368] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:56.004 [2024-11-26 18:04:12.806377] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.004 [2024-11-26 18:04:12.806386] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:56.004 [2024-11-26 18:04:12.806395] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:56.004 [2024-11-26 18:04:12.806404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:17:56.004 [2024-11-26 18:04:12.806416] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:56.004 [2024-11-26 18:04:12.806425] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:56.004 [2024-11-26 18:04:12.806434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:56.004 [2024-11-26 18:04:12.806443] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:56.004 [2024-11-26 18:04:12.806463] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:56.004 [2024-11-26 18:04:12.806473] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:56.004 [2024-11-26 18:04:12.806482] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:56.004 [2024-11-26 18:04:12.806491] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:56.004 [2024-11-26 18:04:12.806500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:56.004 [2024-11-26 18:04:12.806509] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:56.004 [2024-11-26 18:04:12.806518] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:56.004 [2024-11-26 18:04:12.806527] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:56.004 [2024-11-26 18:04:12.806536] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:56.004 [2024-11-26 18:04:12.806545] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:56.004 [2024-11-26 18:04:12.806553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:56.004 [2024-11-26 18:04:12.806563] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:56.004 [2024-11-26 18:04:12.806578] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:56.004 [2024-11-26 18:04:12.806587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:56.004 [2024-11-26 18:04:12.806596] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:56.004 [2024-11-26 18:04:12.806604] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:56.004 [2024-11-26 18:04:12.806613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:56.004 [2024-11-26 18:04:12.806623] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:56.004 [2024-11-26 18:04:12.806637] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:56.004 [2024-11-26 18:04:12.806647] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:56.004 [2024-11-26 18:04:12.806663] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.004 [2024-11-26 18:04:12.806673] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:56.004 [2024-11-26 18:04:12.806682] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:56.004 [2024-11-26 18:04:12.806692] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:56.004 [2024-11-26 18:04:12.806701] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:56.004 [2024-11-26 18:04:12.806710] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:56.004 [2024-11-26 18:04:12.806720] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:56.004 [2024-11-26 18:04:12.806730] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:56.004 [2024-11-26 18:04:12.806745] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:56.004 [2024-11-26 18:04:12.806757] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:56.004 [2024-11-26 18:04:12.806767] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:56.004 [2024-11-26 18:04:12.806777] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:56.004 [2024-11-26 18:04:12.806787] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:56.004 [2024-11-26 18:04:12.806797] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:56.004 [2024-11-26 18:04:12.806807] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:56.004 [2024-11-26 18:04:12.806817] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:56.004 [2024-11-26 18:04:12.806828] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:56.004 [2024-11-26 18:04:12.806838] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:56.004 [2024-11-26 18:04:12.806848] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:56.004 [2024-11-26 18:04:12.806858] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:56.004 [2024-11-26 18:04:12.806868] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:56.004 [2024-11-26 18:04:12.806879] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:56.004 [2024-11-26 18:04:12.806889] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:56.004 [2024-11-26 18:04:12.806906] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:56.004 [2024-11-26 18:04:12.806920] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:56.004 [2024-11-26 18:04:12.806930] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:56.004 [2024-11-26 18:04:12.806940] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:56.004 [2024-11-26 18:04:12.806950] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:17:56.004 [2024-11-26 18:04:12.806961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.004 [2024-11-26 18:04:12.806971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:56.004 [2024-11-26 18:04:12.806982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:17:56.004 [2024-11-26 18:04:12.807003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.004 [2024-11-26 18:04:12.815535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.004 [2024-11-26 18:04:12.815571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:56.004 [2024-11-26 18:04:12.815584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.502 ms 00:17:56.004 [2024-11-26 18:04:12.815601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.004 [2024-11-26 18:04:12.815713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.005 [2024-11-26 18:04:12.815729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:56.005 [2024-11-26 18:04:12.815740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:56.005 [2024-11-26 18:04:12.815750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.005 [2024-11-26 18:04:12.835621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.005 [2024-11-26 18:04:12.835690] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:56.005 [2024-11-26 18:04:12.835709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.837 ms 00:17:56.005 [2024-11-26 18:04:12.835733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.005 [2024-11-26 18:04:12.835789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.005 [2024-11-26 18:04:12.835804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:56.005 [2024-11-26 18:04:12.835824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:56.005 [2024-11-26 18:04:12.835841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.005 [2024-11-26 18:04:12.836366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.005 [2024-11-26 18:04:12.836393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:56.005 [2024-11-26 18:04:12.836408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:17:56.005 [2024-11-26 18:04:12.836420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.005 [2024-11-26 18:04:12.836579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.005 [2024-11-26 18:04:12.836599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:56.005 [2024-11-26 18:04:12.836613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:17:56.005 [2024-11-26 18:04:12.836626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.005 [2024-11-26 18:04:12.844172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.005 [2024-11-26 18:04:12.844221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:56.005 [2024-11-26 18:04:12.844243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.516 ms 00:17:56.005 [2024-11-26 
18:04:12.844253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.005 [2024-11-26 18:04:12.846827] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:56.005 [2024-11-26 18:04:12.846869] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:56.005 [2024-11-26 18:04:12.846894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.005 [2024-11-26 18:04:12.846905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:56.005 [2024-11-26 18:04:12.846916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.531 ms 00:17:56.005 [2024-11-26 18:04:12.846925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.005 [2024-11-26 18:04:12.859993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.005 [2024-11-26 18:04:12.860059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:56.005 [2024-11-26 18:04:12.860075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.035 ms 00:17:56.005 [2024-11-26 18:04:12.860096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.005 [2024-11-26 18:04:12.862147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.005 [2024-11-26 18:04:12.862192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:56.005 [2024-11-26 18:04:12.862204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.001 ms 00:17:56.005 [2024-11-26 18:04:12.862214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.005 [2024-11-26 18:04:12.863715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.005 [2024-11-26 18:04:12.863748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:56.005 [2024-11-26 18:04:12.863765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.467 ms 00:17:56.005 [2024-11-26 18:04:12.863775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.005 [2024-11-26 18:04:12.863961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.005 [2024-11-26 18:04:12.863975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:56.005 [2024-11-26 18:04:12.863986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:17:56.005 [2024-11-26 18:04:12.863996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.005 [2024-11-26 18:04:12.887631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.005 [2024-11-26 18:04:12.887699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:56.005 [2024-11-26 18:04:12.887715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.651 ms 00:17:56.005 [2024-11-26 18:04:12.887727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.005 [2024-11-26 18:04:12.894203] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:56.005 [2024-11-26 18:04:12.897379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.005 [2024-11-26 18:04:12.897416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:56.005 [2024-11-26 18:04:12.897430] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.612 ms 00:17:56.005 [2024-11-26 18:04:12.897441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.005 [2024-11-26 18:04:12.897551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.005 [2024-11-26 18:04:12.897565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:56.005 [2024-11-26 18:04:12.897576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:56.005 [2024-11-26 18:04:12.897586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.005 [2024-11-26 18:04:12.897647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.005 [2024-11-26 18:04:12.897662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:56.005 [2024-11-26 18:04:12.897673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:56.005 [2024-11-26 18:04:12.897683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.005 [2024-11-26 18:04:12.899734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.005 [2024-11-26 18:04:12.899771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:56.005 [2024-11-26 18:04:12.899788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.038 ms 00:17:56.005 [2024-11-26 18:04:12.899805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.005 [2024-11-26 18:04:12.899835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.005 [2024-11-26 18:04:12.899846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:56.005 [2024-11-26 18:04:12.899863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:56.005 [2024-11-26 18:04:12.899876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.005 [2024-11-26 18:04:12.899931] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:56.005 [2024-11-26 18:04:12.899952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.005 [2024-11-26 18:04:12.899965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:56.005 [2024-11-26 18:04:12.899976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:56.005 [2024-11-26 18:04:12.899985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.005 [2024-11-26 18:04:12.903627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.005 [2024-11-26 18:04:12.903666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:56.005 [2024-11-26 18:04:12.903679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.625 ms 00:17:56.005 [2024-11-26 18:04:12.903695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.005 [2024-11-26 18:04:12.903760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.005 [2024-11-26 18:04:12.903779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:56.005 [2024-11-26 18:04:12.903795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:56.005 [2024-11-26 18:04:12.903804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.005 [2024-11-26 18:04:12.904865] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 112.887 ms, result 0
00:17:57.396  [2024-11-26T18:04:15.257Z] Copying: 31/1024 [MB] (31 MBps)
[2024-11-26T18:04:16.220Z] Copying: 62/1024 [MB] (30 MBps)
[2024-11-26T18:04:17.184Z] Copying: 93/1024 [MB] (31 MBps)
[2024-11-26T18:04:18.119Z] Copying: 128/1024 [MB] (34 MBps)
[2024-11-26T18:04:19.496Z] Copying: 162/1024 [MB] (34 MBps)
[2024-11-26T18:04:20.430Z] Copying: 192/1024 [MB] (29 MBps)
[2024-11-26T18:04:21.379Z] Copying: 223/1024 [MB] (31 MBps)
[2024-11-26T18:04:22.316Z] Copying: 252/1024 [MB] (29 MBps)
[2024-11-26T18:04:23.254Z] Copying: 282/1024 [MB] (29 MBps)
[2024-11-26T18:04:24.191Z] Copying: 312/1024 [MB] (29 MBps)
[2024-11-26T18:04:25.128Z] Copying: 346/1024 [MB] (34 MBps)
[2024-11-26T18:04:26.154Z] Copying: 380/1024 [MB] (34 MBps)
[2024-11-26T18:04:27.532Z] Copying: 415/1024 [MB] (34 MBps)
[2024-11-26T18:04:28.098Z] Copying: 448/1024 [MB] (32 MBps)
[2024-11-26T18:04:29.472Z] Copying: 479/1024 [MB] (30 MBps)
[2024-11-26T18:04:30.408Z] Copying: 509/1024 [MB] (30 MBps)
[2024-11-26T18:04:31.344Z] Copying: 541/1024 [MB] (31 MBps)
[2024-11-26T18:04:32.282Z] Copying: 568/1024 [MB] (27 MBps)
[2024-11-26T18:04:33.219Z] Copying: 594/1024 [MB] (25 MBps)
[2024-11-26T18:04:34.157Z] Copying: 620/1024 [MB] (26 MBps)
[2024-11-26T18:04:35.095Z] Copying: 647/1024 [MB] (27 MBps)
[2024-11-26T18:04:36.474Z] Copying: 674/1024 [MB] (27 MBps)
[2024-11-26T18:04:37.412Z] Copying: 702/1024 [MB] (28 MBps)
[2024-11-26T18:04:38.363Z] Copying: 730/1024 [MB] (27 MBps)
[2024-11-26T18:04:39.300Z] Copying: 758/1024 [MB] (27 MBps)
[2024-11-26T18:04:40.238Z] Copying: 787/1024 [MB] (28 MBps)
[2024-11-26T18:04:41.174Z] Copying: 816/1024 [MB] (29 MBps)
[2024-11-26T18:04:42.112Z] Copying: 847/1024 [MB] (30 MBps)
[2024-11-26T18:04:43.492Z] Copying: 875/1024 [MB] (27 MBps)
[2024-11-26T18:04:44.428Z] Copying: 903/1024 [MB] (28 MBps)
[2024-11-26T18:04:45.365Z] Copying: 933/1024 [MB] (30 MBps)
[2024-11-26T18:04:46.302Z] Copying: 964/1024 [MB] (30 MBps)
[2024-11-26T18:04:47.240Z] Copying: 995/1024 [MB] (31 MBps)
[2024-11-26T18:04:47.240Z] Copying: 1023/1024 [MB] (27 MBps)
[2024-11-26T18:04:47.811Z] Copying: 1024/1024 [MB] (average 30 MBps)
[2024-11-26 18:04:47.526393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.885 [2024-11-26 18:04:47.526509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:30.885 [2024-11-26 18:04:47.526534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:30.885 [2024-11-26 18:04:47.526550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.885 [2024-11-26 18:04:47.526585] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:30.885 [2024-11-26 18:04:47.527290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.885 [2024-11-26 18:04:47.527315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:30.885 [2024-11-26 18:04:47.527333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:18:30.885 [2024-11-26 18:04:47.527348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.885 [2024-11-26 18:04:47.527610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.885 [2024-11-26 18:04:47.527635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:30.885 [2024-11-26 18:04:47.527651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.236 ms 00:18:30.885 [2024-11-26 18:04:47.527667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.885 [2024-11-26 18:04:47.531571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.885 [2024-11-26 18:04:47.531613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:30.885 [2024-11-26 18:04:47.531640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.886 ms 00:18:30.885 [2024-11-26 18:04:47.531656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.885 [2024-11-26 18:04:47.539524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.885 [2024-11-26 18:04:47.539586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:30.885 [2024-11-26 18:04:47.539605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.833 ms 00:18:30.885 [2024-11-26 18:04:47.539621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.885 [2024-11-26 18:04:47.541776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.885 [2024-11-26 18:04:47.541828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:30.885 [2024-11-26 18:04:47.541847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.831 ms 00:18:30.885 [2024-11-26 18:04:47.541862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.885 [2024-11-26 18:04:47.545517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.885 [2024-11-26 18:04:47.545582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:30.885 [2024-11-26 18:04:47.545601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.617 ms 00:18:30.885 [2024-11-26 18:04:47.545618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.885 [2024-11-26 18:04:47.545800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.885 [2024-11-26 18:04:47.545827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:30.885 [2024-11-26 18:04:47.545846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:18:30.885 [2024-11-26 18:04:47.545861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.885 [2024-11-26 18:04:47.547788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.886 [2024-11-26 18:04:47.547838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:30.886 [2024-11-26 18:04:47.547855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.904 ms 00:18:30.886 [2024-11-26 18:04:47.547871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.886 [2024-11-26 18:04:47.549286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.886 [2024-11-26 18:04:47.549330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:30.886 [2024-11-26 18:04:47.549346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.380 ms 00:18:30.886 [2024-11-26 18:04:47.549361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.886 [2024-11-26 18:04:47.550635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.886 [2024-11-26 18:04:47.550697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:30.886 [2024-11-26 
18:04:47.550720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:18:30.886 [2024-11-26 18:04:47.550740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.886 [2024-11-26 18:04:47.552039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.886 [2024-11-26 18:04:47.552089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:30.886 [2024-11-26 18:04:47.552100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.157 ms 00:18:30.886 [2024-11-26 18:04:47.552110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.886 [2024-11-26 18:04:47.552136] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:30.886 [2024-11-26 18:04:47.552153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:30.886 [2024-11-26 18:04:47.552166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:30.886 [2024-11-26 18:04:47.552177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:30.886 [2024-11-26 18:04:47.552187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:30.886 [2024-11-26 18:04:47.552215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:30.886 [2024-11-26 18:04:47.552226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:30.886 [2024-11-26 18:04:47.552250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:30.886 [2024-11-26 18:04:47.552260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:30.886 [2024-11-26 18:04:47.552271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:30.886 [2024-11-26 18:04:47.552282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:30.886 [2024-11-26 18:04:47.552292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:30.886 [2024-11-26 18:04:47.552303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:30.886 [2024-11-26 18:04:47.552313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:30.886 [2024-11-26 18:04:47.552324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:30.886 [2024-11-26 18:04:47.552334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:30.886 [2024-11-26 18:04:47.552345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:30.886 [2024-11-26 18:04:47.552797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:30.886 [2024-11-26 18:04:47.552812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:30.886 [2024-11-26 18:04:47.552823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:30.886 [2024-11-26 18:04:47.552834] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free [... ftl_debug.c: 167:ftl_dev_dump_bands entries for Bands 21-94 elided; every band reports identically: 0 / 261120 wr_cnt: 0 state: free ...]
00:18:30.887 [2024-11-26 18:04:47.553821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:30.887 [2024-11-26 18:04:47.553831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:30.887 [2024-11-26 18:04:47.553842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:30.887 [2024-11-26 18:04:47.553852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:30.887 [2024-11-26 18:04:47.553862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:30.887 [2024-11-26 18:04:47.553872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:30.887 [2024-11-26 18:04:47.553891] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:30.887 [2024-11-26 18:04:47.553907] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6a8373d9-959b-4681-90e1-ca6922fae046 00:18:30.887 [2024-11-26 18:04:47.553917] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:30.887 [2024-11-26 18:04:47.553927] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:30.887 [2024-11-26 18:04:47.553936] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:30.887 [2024-11-26 18:04:47.553955] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:30.887 [2024-11-26 18:04:47.553972] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:30.887 [2024-11-26 18:04:47.553982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:30.887 [2024-11-26 18:04:47.553992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:30.887 [2024-11-26 18:04:47.554001] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:30.887 [2024-11-26 18:04:47.554009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:30.887 [2024-11-26 18:04:47.554020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.887 [2024-11-26 18:04:47.554030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:30.887 [2024-11-26 18:04:47.554041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.889 ms 00:18:30.887 [2024-11-26 18:04:47.554058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.887 [2024-11-26 18:04:47.555823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.887 [2024-11-26 18:04:47.555848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:30.887 [2024-11-26 18:04:47.555860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.721 ms 00:18:30.887 [2024-11-26 18:04:47.555870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.887 [2024-11-26 18:04:47.555939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.887 [2024-11-26 18:04:47.555959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:30.887 [2024-11-26 18:04:47.555975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:18:30.887 [2024-11-26 18:04:47.555984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.887 [2024-11-26 18:04:47.562834] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:18:30.887 [2024-11-26 18:04:47.562878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:30.887 [2024-11-26 18:04:47.562891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.887 [2024-11-26 18:04:47.562902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.887 [2024-11-26 18:04:47.562957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.887 [2024-11-26 18:04:47.562968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:30.887 [2024-11-26 18:04:47.562984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.887 [2024-11-26 18:04:47.562994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.887 [2024-11-26 18:04:47.563090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.887 [2024-11-26 18:04:47.563103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:30.887 [2024-11-26 18:04:47.563123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.887 [2024-11-26 18:04:47.563133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.887 [2024-11-26 18:04:47.563151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.887 [2024-11-26 18:04:47.563161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:30.887 [2024-11-26 18:04:47.563171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.887 [2024-11-26 18:04:47.563192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.887 [2024-11-26 18:04:47.577003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.887 [2024-11-26 18:04:47.577072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:30.887 [2024-11-26 18:04:47.577087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.887 [2024-11-26 18:04:47.577097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.887 [2024-11-26 18:04:47.582741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.887 [2024-11-26 18:04:47.582791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:30.887 [2024-11-26 18:04:47.582804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.887 [2024-11-26 18:04:47.582822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.887 [2024-11-26 18:04:47.582897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.887 [2024-11-26 18:04:47.582910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:30.887 [2024-11-26 18:04:47.582921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.887 [2024-11-26 18:04:47.582931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.887 [2024-11-26 18:04:47.582964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.887 [2024-11-26 18:04:47.582976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:30.887 [2024-11-26 18:04:47.582986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.887 [2024-11-26 18:04:47.582995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:30.887 [2024-11-26 18:04:47.583080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.887 [2024-11-26 18:04:47.583093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:30.887 [2024-11-26 18:04:47.583112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.887 [2024-11-26 18:04:47.583122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.887 [2024-11-26 18:04:47.583174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.887 [2024-11-26 18:04:47.583187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:30.887 [2024-11-26 18:04:47.583198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.887 [2024-11-26 18:04:47.583208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.887 [2024-11-26 18:04:47.583251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.887 [2024-11-26 18:04:47.583263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:30.887 [2024-11-26 18:04:47.583274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.887 [2024-11-26 18:04:47.583284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.887 [2024-11-26 18:04:47.583329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.887 [2024-11-26 18:04:47.583341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:30.887 [2024-11-26 18:04:47.583352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.887 [2024-11-26 18:04:47.583362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.887 [2024-11-26 18:04:47.583541] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.163 ms, result 0 00:18:31.147 00:18:31.147 00:18:31.147 18:04:47 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:33.058 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:18:33.058 18:04:49 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:18:33.058 [2024-11-26 18:04:49.686116] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:18:33.058 [2024-11-26 18:04:49.686534] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85079 ] 00:18:33.058 [2024-11-26 18:04:49.829695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.058 [2024-11-26 18:04:49.896537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.318 [2024-11-26 18:04:50.009187] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:33.318 [2024-11-26 18:04:50.009277] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:33.318 [2024-11-26 18:04:50.160738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.318 [2024-11-26 18:04:50.160801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:33.318 [2024-11-26 18:04:50.160823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:33.318 [2024-11-26 18:04:50.160841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.318 [2024-11-26 18:04:50.160918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.318 [2024-11-26 18:04:50.160931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:33.318 [2024-11-26 18:04:50.160942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:18:33.318 [2024-11-26 18:04:50.160951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.318 [2024-11-26 18:04:50.160985] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:33.318 [2024-11-26 18:04:50.161235] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:33.319 [2024-11-26 18:04:50.161264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.319 [2024-11-26 18:04:50.161285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:33.319 [2024-11-26 18:04:50.161295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:18:33.319 [2024-11-26 18:04:50.161311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.319 [2024-11-26 18:04:50.162798] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:33.319 [2024-11-26 18:04:50.165242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.319 [2024-11-26 18:04:50.165274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:33.319 [2024-11-26 18:04:50.165287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.454 ms 00:18:33.319 [2024-11-26 18:04:50.165303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.319 [2024-11-26 18:04:50.165360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.319 [2024-11-26 18:04:50.165373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:33.319 [2024-11-26 18:04:50.165384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:33.319 [2024-11-26 18:04:50.165393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.319 [2024-11-26 18:04:50.172356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.319 [2024-11-26 
18:04:50.172396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:33.319 [2024-11-26 18:04:50.172409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.917 ms 00:18:33.319 [2024-11-26 18:04:50.172419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.319 [2024-11-26 18:04:50.172525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.319 [2024-11-26 18:04:50.172539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:33.319 [2024-11-26 18:04:50.172549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:33.319 [2024-11-26 18:04:50.172570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.319 [2024-11-26 18:04:50.172635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.319 [2024-11-26 18:04:50.172654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:33.319 [2024-11-26 18:04:50.172675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:33.319 [2024-11-26 18:04:50.172685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.319 [2024-11-26 18:04:50.172723] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:33.319 [2024-11-26 18:04:50.174417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.319 [2024-11-26 18:04:50.174450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:33.319 [2024-11-26 18:04:50.174472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.712 ms 00:18:33.319 [2024-11-26 18:04:50.174482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.319 [2024-11-26 18:04:50.174518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.319 [2024-11-26 18:04:50.174529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:33.319 [2024-11-26 18:04:50.174548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:33.319 [2024-11-26 18:04:50.174558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.319 [2024-11-26 18:04:50.174582] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:33.319 [2024-11-26 18:04:50.174604] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:33.319 [2024-11-26 18:04:50.174638] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:33.319 [2024-11-26 18:04:50.174667] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:33.319 [2024-11-26 18:04:50.174734] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:33.319 [2024-11-26 18:04:50.174755] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:33.319 [2024-11-26 18:04:50.174771] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:33.319 [2024-11-26 18:04:50.174784] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:33.319 [2024-11-26 18:04:50.174799] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:33.319 [2024-11-26 18:04:50.174810] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:33.319 [2024-11-26 18:04:50.174834] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:33.319 [2024-11-26 18:04:50.174843] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:33.319 [2024-11-26 18:04:50.174853] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:33.319 [2024-11-26 18:04:50.174863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.319 [2024-11-26 18:04:50.174880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:33.319 [2024-11-26 18:04:50.174890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:18:33.319 [2024-11-26 18:04:50.174904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.319 [2024-11-26 18:04:50.174966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.319 [2024-11-26 18:04:50.174982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:33.319 [2024-11-26 18:04:50.174992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:33.319 [2024-11-26 18:04:50.175002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.319 [2024-11-26 18:04:50.175079] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:33.319 [2024-11-26 18:04:50.175098] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:33.319 [2024-11-26 18:04:50.175108] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:33.319 [2024-11-26 18:04:50.175121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.319 [2024-11-26 18:04:50.175134] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:33.319 [2024-11-26 18:04:50.175144] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:33.319 [2024-11-26 18:04:50.175153] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:33.319 [2024-11-26 18:04:50.175162] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:33.319 [2024-11-26 18:04:50.175171] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:33.319 [2024-11-26 18:04:50.175181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:33.319 [2024-11-26 18:04:50.175190] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:33.319 [2024-11-26 18:04:50.175199] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:33.319 [2024-11-26 18:04:50.175209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:33.319 [2024-11-26 18:04:50.175218] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:33.319 [2024-11-26 18:04:50.175227] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:33.319 [2024-11-26 18:04:50.175236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.319 [2024-11-26 18:04:50.175245] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:33.319 [2024-11-26 18:04:50.175254] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:33.319 [2024-11-26 18:04:50.175263] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:18:33.319 [2024-11-26 18:04:50.175275] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:33.319 [2024-11-26 18:04:50.175285] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:33.319 [2024-11-26 18:04:50.175294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:33.319 [2024-11-26 18:04:50.175304] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:33.319 [2024-11-26 18:04:50.175312] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:33.319 [2024-11-26 18:04:50.175321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:33.319 [2024-11-26 18:04:50.175330] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:33.319 [2024-11-26 18:04:50.175339] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:33.319 [2024-11-26 18:04:50.175348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:33.319 [2024-11-26 18:04:50.175357] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:33.319 [2024-11-26 18:04:50.175366] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:33.319 [2024-11-26 18:04:50.175374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:33.319 [2024-11-26 18:04:50.175383] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:33.319 [2024-11-26 18:04:50.175392] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:33.319 [2024-11-26 18:04:50.175401] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:33.319 [2024-11-26 18:04:50.175410] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:33.319 [2024-11-26 18:04:50.175424] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:33.319 [2024-11-26 18:04:50.175433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:33.319 [2024-11-26 18:04:50.175443] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:33.319 [2024-11-26 18:04:50.175462] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:33.319 [2024-11-26 18:04:50.175472] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:33.319 [2024-11-26 18:04:50.175480] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:33.319 [2024-11-26 18:04:50.175490] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:33.319 [2024-11-26 18:04:50.175500] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:33.319 [2024-11-26 18:04:50.175517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.319 [2024-11-26 18:04:50.175535] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:33.319 [2024-11-26 18:04:50.175544] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:33.319 [2024-11-26 18:04:50.175553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:33.319 [2024-11-26 18:04:50.175563] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:33.319 [2024-11-26 18:04:50.175572] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:33.320 [2024-11-26 18:04:50.175581] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:33.320 [2024-11-26 18:04:50.175591] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:33.320 [2024-11-26 18:04:50.175616] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:33.320 [2024-11-26 18:04:50.175628] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:33.320 [2024-11-26 18:04:50.175638] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:33.320 [2024-11-26 18:04:50.175650] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:33.320 [2024-11-26 18:04:50.175660] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:33.320 [2024-11-26 18:04:50.175670] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:33.320 [2024-11-26 18:04:50.175681] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:33.320 [2024-11-26 18:04:50.175691] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:33.320 [2024-11-26 18:04:50.175701] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:33.320 [2024-11-26 18:04:50.175711] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:33.320 [2024-11-26 18:04:50.175721] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:33.320 [2024-11-26 18:04:50.175731] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:33.320 [2024-11-26 18:04:50.175741] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:33.320 [2024-11-26 18:04:50.175752] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:33.320 [2024-11-26 18:04:50.175761] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:33.320 [2024-11-26 18:04:50.175772] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:33.320 [2024-11-26 18:04:50.175785] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:33.320 [2024-11-26 18:04:50.175796] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:33.320 [2024-11-26 18:04:50.175806] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:33.320 [2024-11-26 18:04:50.175816] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:18:33.320 [2024-11-26 18:04:50.175827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.320 [2024-11-26 18:04:50.175837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:33.320 [2024-11-26 18:04:50.175850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:18:33.320 [2024-11-26 18:04:50.175869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.320 [2024-11-26 18:04:50.184400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.320 [2024-11-26 18:04:50.184437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:33.320 [2024-11-26 18:04:50.184451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.503 ms 00:18:33.320 [2024-11-26 18:04:50.184471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.320 [2024-11-26 18:04:50.184555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.320 [2024-11-26 18:04:50.184565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:33.320 [2024-11-26 18:04:50.184576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:33.320 [2024-11-26 18:04:50.184589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.320 [2024-11-26 18:04:50.205471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.320 [2024-11-26 18:04:50.205527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:33.320 [2024-11-26 18:04:50.205546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.847 ms 00:18:33.320 [2024-11-26 18:04:50.205560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.320 [2024-11-26 18:04:50.205618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.320 [2024-11-26 18:04:50.205633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:33.320 [2024-11-26 18:04:50.205657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:33.320 [2024-11-26 18:04:50.205681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.320 [2024-11-26 18:04:50.206227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.320 [2024-11-26 18:04:50.206250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:33.320 [2024-11-26 18:04:50.206265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:18:33.320 [2024-11-26 18:04:50.206278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.320 [2024-11-26 18:04:50.206421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.320 [2024-11-26 18:04:50.206438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:33.320 [2024-11-26 18:04:50.206466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:18:33.320 [2024-11-26 18:04:50.206485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.320 [2024-11-26 18:04:50.214048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.320 [2024-11-26 18:04:50.214094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:33.320 [2024-11-26 18:04:50.214108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.547 ms 00:18:33.320 [2024-11-26 
18:04:50.214118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.320 [2024-11-26 18:04:50.216842] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:33.320 [2024-11-26 18:04:50.216880] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:33.320 [2024-11-26 18:04:50.216895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.320 [2024-11-26 18:04:50.216906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:33.320 [2024-11-26 18:04:50.216917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.663 ms 00:18:33.320 [2024-11-26 18:04:50.216927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.320 [2024-11-26 18:04:50.229945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.320 [2024-11-26 18:04:50.229994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:33.320 [2024-11-26 18:04:50.230009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.996 ms 00:18:33.320 [2024-11-26 18:04:50.230020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.320 [2024-11-26 18:04:50.232488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.320 [2024-11-26 18:04:50.232522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:33.320 [2024-11-26 18:04:50.232535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.413 ms 00:18:33.320 [2024-11-26 18:04:50.232545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.320 [2024-11-26 18:04:50.233863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.320 [2024-11-26 18:04:50.233893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:33.320 [2024-11-26 18:04:50.233910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.283 ms 00:18:33.320 [2024-11-26 18:04:50.233920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.320 [2024-11-26 18:04:50.234110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.320 [2024-11-26 18:04:50.234129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:33.320 [2024-11-26 18:04:50.234141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:18:33.320 [2024-11-26 18:04:50.234154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.579 [2024-11-26 18:04:50.257940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.579 [2024-11-26 18:04:50.258016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:33.579 [2024-11-26 18:04:50.258040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.787 ms 00:18:33.579 [2024-11-26 18:04:50.258051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.579 [2024-11-26 18:04:50.264869] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:33.579 [2024-11-26 18:04:50.268314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.579 [2024-11-26 18:04:50.268350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:33.579 [2024-11-26 18:04:50.268366] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.205 ms 00:18:33.579 [2024-11-26 18:04:50.268387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.579 [2024-11-26 18:04:50.268498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.579 [2024-11-26 18:04:50.268511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:33.579 [2024-11-26 18:04:50.268523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:33.579 [2024-11-26 18:04:50.268533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.579 [2024-11-26 18:04:50.268614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.579 [2024-11-26 18:04:50.268626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:33.579 [2024-11-26 18:04:50.268636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:33.579 [2024-11-26 18:04:50.268646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.579 [2024-11-26 18:04:50.270804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.579 [2024-11-26 18:04:50.270833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:33.579 [2024-11-26 18:04:50.270853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.144 ms 00:18:33.579 [2024-11-26 18:04:50.270863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.579 [2024-11-26 18:04:50.270904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.579 [2024-11-26 18:04:50.270915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:33.579 [2024-11-26 18:04:50.270936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:33.579 [2024-11-26 18:04:50.270946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.579 [2024-11-26 18:04:50.270985] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:33.579 [2024-11-26 18:04:50.271005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.580 [2024-11-26 18:04:50.271018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:33.580 [2024-11-26 18:04:50.271028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:33.580 [2024-11-26 18:04:50.271038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.580 [2024-11-26 18:04:50.274702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.580 [2024-11-26 18:04:50.274736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:33.580 [2024-11-26 18:04:50.274750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.650 ms 00:18:33.580 [2024-11-26 18:04:50.274766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.580 [2024-11-26 18:04:50.274833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.580 [2024-11-26 18:04:50.274857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:33.580 [2024-11-26 18:04:50.274868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:33.580 [2024-11-26 18:04:50.274877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.580 [2024-11-26 18:04:50.275991] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 115.013 ms, result 0 00:18:34.518  [2024-11-26T18:04:52.379Z] Copying: 28/1024 [MB] (28 MBps) [... 36 intermediate progress updates (57-1020 MB) elided; throughput steady at 24-30 MBps ...] [2024-11-26T18:05:28.147Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-26 18:05:28.143843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.221 [2024-11-26 18:05:28.143911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:11.221 [2024-11-26 18:05:28.143928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:11.221 [2024-11-26 18:05:28.143949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.481 [2024-11-26 18:05:28.146517] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:11.481 [2024-11-26 18:05:28.148574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.481 [2024-11-26 18:05:28.148612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:11.481 [2024-11-26 18:05:28.148627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.003 ms 00:19:11.481 [2024-11-26 18:05:28.148637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.481 [2024-11-26 18:05:28.157780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.481 [2024-11-26
18:05:28.157825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:11.481 [2024-11-26 18:05:28.157839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.384 ms 00:19:11.481 [2024-11-26 18:05:28.157849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.481 [2024-11-26 18:05:28.181290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.481 [2024-11-26 18:05:28.181348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:11.481 [2024-11-26 18:05:28.181365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.457 ms 00:19:11.481 [2024-11-26 18:05:28.181377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.481 [2024-11-26 18:05:28.186520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.481 [2024-11-26 18:05:28.186571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:11.481 [2024-11-26 18:05:28.186584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.115 ms 00:19:11.481 [2024-11-26 18:05:28.186594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.481 [2024-11-26 18:05:28.188242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.481 [2024-11-26 18:05:28.188277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:11.481 [2024-11-26 18:05:28.188288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.600 ms 00:19:11.481 [2024-11-26 18:05:28.188298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.481 [2024-11-26 18:05:28.191987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.481 [2024-11-26 18:05:28.192028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:11.481 [2024-11-26 18:05:28.192041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.668 ms 00:19:11.481 [2024-11-26 18:05:28.192050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.481 [2024-11-26 18:05:28.293774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.481 [2024-11-26 18:05:28.293861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:11.481 [2024-11-26 18:05:28.293878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.851 ms 00:19:11.481 [2024-11-26 18:05:28.293890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.481 [2024-11-26 18:05:28.296344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.481 [2024-11-26 18:05:28.296391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:11.481 [2024-11-26 18:05:28.296404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.437 ms 00:19:11.481 [2024-11-26 18:05:28.296414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.481 [2024-11-26 18:05:28.297839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.481 [2024-11-26 18:05:28.297876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:11.481 [2024-11-26 18:05:28.297888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.397 ms 00:19:11.481 [2024-11-26 18:05:28.297899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.481 [2024-11-26 18:05:28.299127] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.481 [2024-11-26 18:05:28.299164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:11.481 [2024-11-26 18:05:28.299176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.202 ms 00:19:11.481 [2024-11-26 18:05:28.299186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.481 [2024-11-26 18:05:28.300257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.481 [2024-11-26 18:05:28.300291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:11.481 [2024-11-26 18:05:28.300304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 00:19:11.481 [2024-11-26 18:05:28.300314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.481 [2024-11-26 18:05:28.300339] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:11.481 [2024-11-26 18:05:28.300356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 111872 / 261120 wr_cnt: 1 state: open 00:19:11.481 [2024-11-26 18:05:28.300369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:11.481 [2024-11-26 18:05:28.300380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:11.481 [2024-11-26 18:05:28.300391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:11.481 [2024-11-26 18:05:28.300401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:11.481 [2024-11-26 18:05:28.300412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:11.481 [2024-11-26 18:05:28.300422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:11.481 [2024-11-26 18:05:28.300433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:11.481 [2024-11-26 18:05:28.300443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:11.481 [2024-11-26 18:05:28.300465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:11.481 [2024-11-26 18:05:28.300476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:11.481 [2024-11-26 18:05:28.300486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:11.481 [2024-11-26 18:05:28.300497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:11.481 [2024-11-26 18:05:28.300507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:11.481 [2024-11-26 18:05:28.300518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:11.481 [2024-11-26 18:05:28.300529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:11.481 [2024-11-26 18:05:28.300540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:11.481 [2024-11-26 18:05:28.300550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:11.481 [2024-11-26 
18:05:28.300561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:11.481 [2024-11-26 18:05:28.300571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:11.482 [2024-11-26 18:05:28.300581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:11.482 [2024-11-26 18:05:28.300591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:11.482 [2024-11-26 18:05:28.300602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:11.482 [2024-11-26 18:05:28.300612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:11.482 [2024-11-26 18:05:28.300622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:11.482 [2024-11-26 18:05:28.300633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:11.482 [2024-11-26 18:05:28.300643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:11.482 [2024-11-26 18:05:28.300653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:11.482 [2024-11-26 18:05:28.300664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:11.482 [2024-11-26 18:05:28.300674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:11.482 [2024-11-26 18:05:28.300685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:11.482 [2024-11-26 18:05:28.300695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:11.482 [2024-11-26 18:05:28.300705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:11.482 [2024-11-26 18:05:28.300715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:11.482 [2024-11-26 18:05:28.300725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:11.482 [2024-11-26 18:05:28.300735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:11.482 [2024-11-26 18:05:28.300746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:11.482 [2024-11-26 18:05:28.300756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:11.482 [2024-11-26 18:05:28.300766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:11.482 [2024-11-26 18:05:28.300777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:11.482 [2024-11-26 18:05:28.300788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:11.482 [2024-11-26 18:05:28.300799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:11.482 [2024-11-26 18:05:28.300809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 
00:19:11.482 [2024-11-26 18:05:28.300819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.300829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.300840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.300850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.300860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.300871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.300881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.300891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.300902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.300926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.300937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.300947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.300959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.300969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.300979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.300989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:19:11.482 [2024-11-26 18:05:28.301432] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:11.482 [2024-11-26 18:05:28.301442] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6a8373d9-959b-4681-90e1-ca6922fae046
00:19:11.482 [2024-11-26 18:05:28.301463] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 111872
00:19:11.482 [2024-11-26 18:05:28.301474] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 112832
00:19:11.483 [2024-11-26 18:05:28.301487] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 111872
00:19:11.483 [2024-11-26 18:05:28.301506] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0086
00:19:11.483 [2024-11-26 18:05:28.301520] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:11.483 [2024-11-26 18:05:28.301530] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:11.483 [2024-11-26 18:05:28.301552] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:19:11.483 [2024-11-26 18:05:28.301561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:19:11.483 [2024-11-26 18:05:28.301569] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:19:11.483 [2024-11-26 18:05:28.301579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.483 [2024-11-26 18:05:28.301589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:19:11.483 [2024-11-26 18:05:28.301599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.242 ms
00:19:11.483 [2024-11-26 18:05:28.301608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:11.483 [2024-11-26 18:05:28.303355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.483 [2024-11-26 18:05:28.303380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:19:11.483 [2024-11-26 18:05:28.303397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.731 ms
00:19:11.483 [2024-11-26 18:05:28.303407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:11.483 [2024-11-26 18:05:28.303486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.483 [2024-11-26 18:05:28.303497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:19:11.483 [2024-11-26 18:05:28.303508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms
00:19:11.483 [2024-11-26 18:05:28.303518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:11.483 [2024-11-26 18:05:28.310320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:11.483 [2024-11-26 18:05:28.310356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:19:11.483 [2024-11-26 18:05:28.310369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:11.483 [2024-11-26 18:05:28.310379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:11.483 [2024-11-26 18:05:28.310436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:11.483 [2024-11-26 18:05:28.310446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:19:11.483 [2024-11-26 18:05:28.310475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:11.483 [2024-11-26 18:05:28.310485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:11.483 [2024-11-26 18:05:28.310561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:11.483 [2024-11-26 18:05:28.310581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:19:11.483 [2024-11-26 18:05:28.310591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:11.483 [2024-11-26 18:05:28.310601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:11.483 [2024-11-26 18:05:28.310619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:11.483 [2024-11-26 18:05:28.310629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:19:11.483 [2024-11-26 18:05:28.310639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:11.483 [2024-11-26 18:05:28.310649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:11.483 [2024-11-26 18:05:28.323851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:11.483 [2024-11-26 18:05:28.323903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:19:11.483 [2024-11-26 18:05:28.323917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:11.483 [2024-11-26 18:05:28.323927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:11.483 [2024-11-26 18:05:28.328567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:11.483 [2024-11-26 18:05:28.328604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:19:11.483 [2024-11-26 18:05:28.328617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:11.483 [2024-11-26 18:05:28.328627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:11.483 [2024-11-26 18:05:28.328688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:11.483 [2024-11-26 18:05:28.328700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:19:11.483 [2024-11-26 18:05:28.328724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:11.483 [2024-11-26 18:05:28.328734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:11.483 [2024-11-26 18:05:28.328766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:11.483 [2024-11-26 18:05:28.328777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:19:11.483 [2024-11-26 18:05:28.328787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:11.483 [2024-11-26 18:05:28.328797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:11.483 [2024-11-26 18:05:28.328883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:11.483 [2024-11-26 18:05:28.328895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:19:11.483 [2024-11-26 18:05:28.328905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:11.483 [2024-11-26 18:05:28.328919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:11.483 [2024-11-26 18:05:28.328953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:11.483 [2024-11-26 18:05:28.328965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:19:11.483 [2024-11-26 18:05:28.328975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:11.483 [2024-11-26 18:05:28.328985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:11.483 [2024-11-26 18:05:28.329021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:11.483 [2024-11-26 18:05:28.329033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:11.483 [2024-11-26 18:05:28.329042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:11.483 [2024-11-26 18:05:28.329056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:11.483 [2024-11-26 18:05:28.329099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:11.483 [2024-11-26 18:05:28.329110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:11.483 [2024-11-26 18:05:28.329120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:11.483 [2024-11-26 18:05:28.329129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:11.483 [2024-11-26 18:05:28.329248] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 187.341 ms, result 0
00:19:12.420
00:19:12.420
00:19:12.420 18:05:29 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
00:19:12.420 [2024-11-26 18:05:29.090322] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:19:12.420 [2024-11-26 18:05:29.090524] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85486 ]
00:19:12.420 [2024-11-26 18:05:29.245205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:12.420 [2024-11-26 18:05:29.292688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:12.680 [2024-11-26 18:05:29.396638] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:12.680 [2024-11-26 18:05:29.396772] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:12.680 [2024-11-26 18:05:29.548533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.680 [2024-11-26 18:05:29.548595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:19:12.680 [2024-11-26 18:05:29.548611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:19:12.680 [2024-11-26 18:05:29.548622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.680 [2024-11-26 18:05:29.548689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.680 [2024-11-26 18:05:29.548703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:12.680 [2024-11-26 18:05:29.548720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms
00:19:12.680 [2024-11-26 18:05:29.548736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.680 [2024-11-26 18:05:29.548763] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:19:12.680 [2024-11-26 18:05:29.549055] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:19:12.680 [2024-11-26 18:05:29.549075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.680 [2024-11-26 18:05:29.549091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:12.680 [2024-11-26 18:05:29.549102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms
00:19:12.680 [2024-11-26 18:05:29.549119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.680 [2024-11-26 18:05:29.550671] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:19:12.680 [2024-11-26 18:05:29.553342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.680 [2024-11-26 18:05:29.553389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:19:12.680 [2024-11-26 18:05:29.553403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.677 ms
00:19:12.680 [2024-11-26 18:05:29.553418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.680 [2024-11-26 18:05:29.553490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.680 [2024-11-26 18:05:29.553511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:19:12.680 [2024-11-26 18:05:29.553522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms
00:19:12.680 [2024-11-26 18:05:29.553532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.680 [2024-11-26 18:05:29.560333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.680 [2024-11-26 18:05:29.560371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:19:12.680 [2024-11-26 18:05:29.560385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.759 ms
00:19:12.680 [2024-11-26 18:05:29.560395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.680 [2024-11-26 18:05:29.560485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.680 [2024-11-26 18:05:29.560499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:19:12.680 [2024-11-26 18:05:29.560509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms
00:19:12.680 [2024-11-26 18:05:29.560519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.680 [2024-11-26 18:05:29.560588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.680 [2024-11-26 18:05:29.560607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:19:12.680 [2024-11-26 18:05:29.560618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:19:12.680 [2024-11-26 18:05:29.560634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.680 [2024-11-26 18:05:29.560665] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:12.680 [2024-11-26 18:05:29.562325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.680 [2024-11-26 18:05:29.562363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:19:12.680 [2024-11-26 18:05:29.562374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.673 ms
00:19:12.680 [2024-11-26 18:05:29.562384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.680 [2024-11-26 18:05:29.562416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.680 [2024-11-26 18:05:29.562427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:19:12.680 [2024-11-26 18:05:29.562437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:19:12.680 [2024-11-26 18:05:29.562450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.680 [2024-11-26 18:05:29.562486] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:19:12.680 [2024-11-26 18:05:29.562508] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes
00:19:12.680 [2024-11-26 18:05:29.562556] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:19:12.680 [2024-11-26 18:05:29.562574] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes
00:19:12.680 [2024-11-26 18:05:29.562638] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes
00:19:12.680 [2024-11-26 18:05:29.562655] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:19:12.680 [2024-11-26 18:05:29.562670] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes
00:19:12.680 [2024-11-26 18:05:29.562693] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:19:12.680 [2024-11-26 18:05:29.562704] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:19:12.680 [2024-11-26 18:05:29.562715] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:19:12.680 [2024-11-26 18:05:29.562725] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:19:12.680 [2024-11-26 18:05:29.562735] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024
00:19:12.680 [2024-11-26 18:05:29.562744] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4
00:19:12.680 [2024-11-26 18:05:29.562761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.680 [2024-11-26 18:05:29.562771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:19:12.680 [2024-11-26 18:05:29.562788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms
00:19:12.680 [2024-11-26 18:05:29.562801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.680 [2024-11-26 18:05:29.562858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.680 [2024-11-26 18:05:29.562868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:19:12.680 [2024-11-26 18:05:29.562878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms
00:19:12.680 [2024-11-26 18:05:29.562893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.680 [2024-11-26 18:05:29.562966] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:19:12.680 [2024-11-26 18:05:29.562979] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:19:12.680 [2024-11-26 18:05:29.562990] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:19:12.680 [2024-11-26 18:05:29.563000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:19:12.680 [2024-11-26 18:05:29.563009] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:19:12.680 [2024-11-26 18:05:29.563018] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:19:12.680 [2024-11-26 18:05:29.563027] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:19:12.680 [2024-11-26 18:05:29.563036] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:19:12.681 [2024-11-26 18:05:29.563045] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:19:12.681 [2024-11-26 18:05:29.563054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:19:12.681 [2024-11-26 18:05:29.563063] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:19:12.681 [2024-11-26 18:05:29.563072] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:19:12.681 [2024-11-26 18:05:29.563081] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:19:12.681 [2024-11-26 18:05:29.563090] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:19:12.681 [2024-11-26 18:05:29.563099] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB
00:19:12.681 [2024-11-26 18:05:29.563111] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:19:12.681 [2024-11-26 18:05:29.563120] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:19:12.681 [2024-11-26 18:05:29.563129] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB
00:19:12.681 [2024-11-26 18:05:29.563138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:19:12.681 [2024-11-26 18:05:29.563149] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc
00:19:12.681 [2024-11-26 18:05:29.563158] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB
00:19:12.681 [2024-11-26 18:05:29.563167] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB
00:19:12.681 [2024-11-26 18:05:29.563175] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:19:12.681 [2024-11-26 18:05:29.563184] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:19:12.681 [2024-11-26 18:05:29.563193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:19:12.681 [2024-11-26 18:05:29.563202] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:19:12.681 [2024-11-26 18:05:29.563211] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB
00:19:12.681 [2024-11-26 18:05:29.563219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:19:12.681 [2024-11-26 18:05:29.563228] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:19:12.681 [2024-11-26 18:05:29.563237] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:19:12.681 [2024-11-26 18:05:29.563246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:19:12.681 [2024-11-26 18:05:29.563257] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:19:12.681 [2024-11-26 18:05:29.563266] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB
00:19:12.681 [2024-11-26 18:05:29.563275] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:19:12.681 [2024-11-26 18:05:29.563284] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:19:12.681 [2024-11-26 18:05:29.563293] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:19:12.681 [2024-11-26 18:05:29.563301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:19:12.681 [2024-11-26 18:05:29.563310] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:19:12.681 [2024-11-26 18:05:29.563319] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB
00:19:12.681 [2024-11-26 18:05:29.563328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:19:12.681 [2024-11-26 18:05:29.563336] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:19:12.681 [2024-11-26 18:05:29.563349] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:19:12.681 [2024-11-26 18:05:29.563358] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:19:12.681 [2024-11-26 18:05:29.563368] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:19:12.681 [2024-11-26 18:05:29.563378] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:19:12.681 [2024-11-26 18:05:29.563387] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:19:12.681 [2024-11-26 18:05:29.563396] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:19:12.681 [2024-11-26 18:05:29.563408] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:19:12.681 [2024-11-26 18:05:29.563417] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:19:12.681 [2024-11-26 18:05:29.563426] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:19:12.681 [2024-11-26 18:05:29.563435] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:19:12.681 [2024-11-26 18:05:29.563448] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:19:12.681 [2024-11-26 18:05:29.563470] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:19:12.681 [2024-11-26 18:05:29.563480] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80
00:19:12.681 [2024-11-26 18:05:29.563490] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80
00:19:12.681 [2024-11-26 18:05:29.563500] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400
00:19:12.681 [2024-11-26 18:05:29.563510] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400
00:19:12.681 [2024-11-26 18:05:29.563520] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400
00:19:12.681 [2024-11-26 18:05:29.563531] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400
00:19:12.681 [2024-11-26 18:05:29.563541] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40
00:19:12.681 [2024-11-26 18:05:29.563551] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40
00:19:12.681 [2024-11-26 18:05:29.563561] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20
00:19:12.681 [2024-11-26 18:05:29.563571] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20
00:19:12.681 [2024-11-26 18:05:29.563584] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000
00:19:12.681 [2024-11-26 18:05:29.563594] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120
00:19:12.681 [2024-11-26 18:05:29.563605] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:19:12.681 [2024-11-26 18:05:29.563616] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:19:12.681 [2024-11-26 18:05:29.563626] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:19:12.681 [2024-11-26 18:05:29.563637] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:19:12.681 [2024-11-26 18:05:29.563647] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:19:12.681 [2024-11-26 18:05:29.563656] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:19:12.681 [2024-11-26 18:05:29.563667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.681 [2024-11-26 18:05:29.563676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:19:12.681 [2024-11-26 18:05:29.563693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms
00:19:12.681 [2024-11-26 18:05:29.563715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.681 [2024-11-26 18:05:29.572732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.681 [2024-11-26 18:05:29.572772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:19:12.681 [2024-11-26 18:05:29.572787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.980 ms
00:19:12.681 [2024-11-26 18:05:29.572806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.681 [2024-11-26 18:05:29.572891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.681 [2024-11-26 18:05:29.572909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:19:12.681 [2024-11-26 18:05:29.572920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:19:12.681 [2024-11-26 18:05:29.572930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.681 [2024-11-26 18:05:29.591700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.681 [2024-11-26 18:05:29.591753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:19:12.681 [2024-11-26 18:05:29.591776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.736 ms
00:19:12.681 [2024-11-26 18:05:29.591800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.681 [2024-11-26 18:05:29.591854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.681 [2024-11-26 18:05:29.591869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:19:12.681 [2024-11-26 18:05:29.591883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:19:12.681 [2024-11-26 18:05:29.591910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.681 [2024-11-26 18:05:29.592429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.681 [2024-11-26 18:05:29.592474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:19:12.681 [2024-11-26 18:05:29.592489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms
00:19:12.681 [2024-11-26 18:05:29.592503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.681 [2024-11-26 18:05:29.592648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.681 [2024-11-26 18:05:29.592664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:19:12.681 [2024-11-26 18:05:29.592678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms
00:19:12.681 [2024-11-26 18:05:29.592690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.681 [2024-11-26 18:05:29.600275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.681 [2024-11-26 18:05:29.600323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:19:12.681 [2024-11-26 18:05:29.600344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.564 ms
00:19:12.681 [2024-11-26 18:05:29.600354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.681 [2024-11-26 18:05:29.603145] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:19:12.681 [2024-11-26 18:05:29.603187] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:19:12.681 [2024-11-26 18:05:29.603206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.681 [2024-11-26 18:05:29.603223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:19:12.681 [2024-11-26 18:05:29.603236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.744 ms
00:19:12.682 [2024-11-26 18:05:29.603253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.939 [2024-11-26 18:05:29.616179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.939 [2024-11-26 18:05:29.616239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:19:12.939 [2024-11-26 18:05:29.616256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.890 ms
00:19:12.939 [2024-11-26 18:05:29.616278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.939 [2024-11-26 18:05:29.618371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.939 [2024-11-26 18:05:29.618408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:19:12.939 [2024-11-26 18:05:29.618421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.050 ms
00:19:12.939 [2024-11-26 18:05:29.618431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.939 [2024-11-26 18:05:29.619837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.939 [2024-11-26 18:05:29.619868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:19:12.939 [2024-11-26 18:05:29.619879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.362 ms
00:19:12.939 [2024-11-26 18:05:29.619888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.939 [2024-11-26 18:05:29.620084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.939 [2024-11-26 18:05:29.620105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:19:12.939 [2024-11-26 18:05:29.620123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms
00:19:12.939 [2024-11-26 18:05:29.620133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.939 [2024-11-26 18:05:29.644979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.939 [2024-11-26 18:05:29.645227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:19:12.939 [2024-11-26 18:05:29.645255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.862 ms
00:19:12.939 [2024-11-26 18:05:29.645267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.939 [2024-11-26 18:05:29.652054] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:19:12.939 [2024-11-26 18:05:29.655592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.939 [2024-11-26 18:05:29.655627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:19:12.939 [2024-11-26 18:05:29.655641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.290 ms
00:19:12.939 [2024-11-26 18:05:29.655652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.939 [2024-11-26 18:05:29.655750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.939 [2024-11-26 18:05:29.655766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:19:12.939 [2024-11-26 18:05:29.655778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:19:12.939 [2024-11-26 18:05:29.655787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.939 [2024-11-26 18:05:29.657439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.939 [2024-11-26 18:05:29.657501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:19:12.939 [2024-11-26 18:05:29.657526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.610 ms
00:19:12.939 [2024-11-26 18:05:29.657539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.939 [2024-11-26 18:05:29.659801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.939 [2024-11-26 18:05:29.659833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs
00:19:12.939 [2024-11-26 18:05:29.659849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.233 ms
00:19:12.939 [2024-11-26 18:05:29.659859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.939 [2024-11-26 18:05:29.659903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.939 [2024-11-26 18:05:29.659916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:19:12.939 [2024-11-26 18:05:29.659926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:19:12.939 [2024-11-26 18:05:29.659940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.939 [2024-11-26 18:05:29.659994] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:19:12.939 [2024-11-26 18:05:29.660007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.939 [2024-11-26 18:05:29.660027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:19:12.939 [2024-11-26 18:05:29.660045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:19:12.939 [2024-11-26 18:05:29.660054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.940 [2024-11-26 18:05:29.663724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.940 [2024-11-26 18:05:29.663764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:19:12.940 [2024-11-26 18:05:29.663778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.652 ms
00:19:12.940 [2024-11-26 18:05:29.663791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.940 [2024-11-26 18:05:29.663854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.940 [2024-11-26 18:05:29.663866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:19:12.940 [2024-11-26 18:05:29.663881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:19:12.940 [2024-11-26 18:05:29.663899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.940 [2024-11-26 18:05:29.669309] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 119.959 ms, result 0
00:19:14.312  [2024-11-26T18:05:32.173Z] Copying: 26/1024 [MB] (26 MBps)
[2024-11-26T18:05:33.109Z] Copying: 54/1024 [MB] (28 MBps)
[2024-11-26T18:05:34.047Z] Copying: 81/1024 [MB] (26 MBps)
[2024-11-26T18:05:34.982Z] Copying: 109/1024 [MB] (27 MBps)
[2024-11-26T18:05:35.915Z] Copying: 136/1024 [MB] (27 MBps)
[2024-11-26T18:05:36.885Z] Copying: 164/1024 [MB] (28 MBps)
[2024-11-26T18:05:38.256Z] Copying: 198/1024 [MB] (33 MBps)
[2024-11-26T18:05:39.229Z] Copying: 232/1024 [MB] (34 MBps)
[2024-11-26T18:05:40.164Z] Copying: 263/1024 [MB] (30 MBps)
[2024-11-26T18:05:41.100Z] Copying: 297/1024 [MB] (33 MBps)
[2024-11-26T18:05:42.036Z] Copying: 326/1024 [MB] (29 MBps)
[2024-11-26T18:05:43.046Z] Copying: 355/1024 [MB] (28 MBps)
[2024-11-26T18:05:43.990Z] Copying: 383/1024 [MB] (27 MBps)
[2024-11-26T18:05:44.930Z] Copying: 412/1024 [MB] (29 MBps)
[2024-11-26T18:05:46.308Z] Copying: 444/1024 [MB] (31 MBps)
[2024-11-26T18:05:46.876Z] Copying: 473/1024 [MB] (29 MBps)
[2024-11-26T18:05:48.255Z] Copying: 503/1024 [MB] (29 MBps)
[2024-11-26T18:05:49.191Z] Copying: 531/1024 [MB] (28 MBps)
[2024-11-26T18:05:50.124Z] Copying: 559/1024 [MB] (27 MBps)
[2024-11-26T18:05:51.130Z] Copying: 588/1024 [MB] (28 MBps)
[2024-11-26T18:05:52.065Z] Copying: 616/1024 [MB] (28 MBps)
[2024-11-26T18:05:52.999Z] Copying: 644/1024 [MB] (28 MBps)
[2024-11-26T18:05:54.032Z] Copying: 675/1024 [MB] (30 MBps)
[2024-11-26T18:05:54.966Z] Copying: 703/1024 [MB] (28 MBps)
[2024-11-26T18:05:55.900Z] Copying: 731/1024 [MB] (27 MBps)
[2024-11-26T18:05:57.272Z] Copying: 759/1024 [MB] (28 MBps)
[2024-11-26T18:05:58.205Z] Copying: 788/1024 [MB] (28 MBps)
[2024-11-26T18:05:59.138Z] Copying: 817/1024 [MB] (29 MBps)
[2024-11-26T18:06:00.178Z] Copying: 846/1024 [MB] (29 MBps)
[2024-11-26T18:06:01.116Z] Copying: 875/1024 [MB] (28 MBps)
[2024-11-26T18:06:02.054Z] Copying: 902/1024 [MB] (26 MBps)
[2024-11-26T18:06:02.992Z] Copying: 931/1024 [MB] (29 MBps)
[2024-11-26T18:06:03.930Z] Copying: 960/1024 [MB] (28 MBps)
[2024-11-26T18:06:04.905Z] Copying: 990/1024 [MB] (29 MBps)
[2024-11-26T18:06:04.905Z] Copying: 1022/1024 [MB] (32 MBps)
[2024-11-26T18:06:05.166Z] Copying: 1024/1024 [MB] (average 29 MBps)
[2024-11-26 18:06:04.970785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:48.240 [2024-11-26 18:06:04.970906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:19:48.240 [2024-11-26 18:06:04.970941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:19:48.240 [2024-11-26 18:06:04.970981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:48.240 [2024-11-26 18:06:04.971063] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:48.240 [2024-11-26 18:06:04.973120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:48.240 [2024-11-26 18:06:04.973254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:19:48.240 [2024-11-26 18:06:04.973303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.964 ms
00:19:48.240 [2024-11-26 18:06:04.973345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:48.240 [2024-11-26 18:06:04.973975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:48.240 [2024-11-26 18:06:04.975361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:19:48.240 [2024-11-26 18:06:04.975414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms
00:19:48.240 [2024-11-26 18:06:04.975477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:48.240 [2024-11-26 18:06:04.981565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:48.240 [2024-11-26 18:06:04.981781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:19:48.240 [2024-11-26 18:06:04.981819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.019 ms
00:19:48.240 [2024-11-26 18:06:04.981840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:48.240 [2024-11-26 18:06:04.988707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:48.240 [2024-11-26 18:06:04.988847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:19:48.240 [2024-11-26 18:06:04.988957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.817 ms
00:19:48.240 [2024-11-26 18:06:04.988996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:48.240 [2024-11-26 18:06:04.990657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:48.240 [2024-11-26 18:06:04.990801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:19:48.240 [2024-11-26 18:06:04.990873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.566 ms
00:19:48.240 [2024-11-26 18:06:04.990910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:48.240 [2024-11-26 18:06:04.994976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:48.240 [2024-11-26 18:06:04.995119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:19:48.240 [2024-11-26 18:06:04.995217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.016 ms
00:19:48.240 [2024-11-26 18:06:04.995255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:48.240 [2024-11-26 18:06:05.095305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:48.240 [2024-11-26 18:06:05.095542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:19:48.240 [2024-11-26 18:06:05.095643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.132 ms
00:19:48.240 [2024-11-26 18:06:05.095683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:48.240 [2024-11-26 18:06:05.098060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:48.240 [2024-11-26 18:06:05.098200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:19:48.240 [2024-11-26 18:06:05.098296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.330 ms
00:19:48.240 [2024-11-26 18:06:05.098334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:48.240 [2024-11-26 18:06:05.099746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:48.240 [2024-11-26 18:06:05.099869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:19:48.240 [2024-11-26 18:06:05.099936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.357 ms
00:19:48.240 [2024-11-26 18:06:05.099971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:48.240 [2024-11-26 18:06:05.101105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:48.240 [2024-11-26 18:06:05.101233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:19:48.240 [2024-11-26 18:06:05.101307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.065 ms
00:19:48.240 [2024-11-26 18:06:05.101342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:48.240 [2024-11-26 18:06:05.102378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:48.240 [2024-11-26 18:06:05.102523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:19:48.240 [2024-11-26 18:06:05.102614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms
00:19:48.240 [2024-11-26 18:06:05.102651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:48.240 [2024-11-26 18:06:05.102705] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:19:48.240 [2024-11-26 18:06:05.102913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open
00:19:48.240 [2024-11-26 18:06:05.102972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.103023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.103108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.103162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.103212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.103262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.103348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.103440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.103511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.103611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.103661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.103708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.103859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.103926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.103977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.104027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.104078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.104171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.104224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.104273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.104324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.104374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.104495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.104545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.104595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.104645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.104730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.104872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.104924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.104974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.105024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.105075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.105231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.105245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.105256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.105268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.105279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.105291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.105302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.105312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:19:48.240 [2024-11-26 18:06:05.105324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:19:48.241 [2024-11-26 18:06:05.105918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*:
[FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:48.241 [2024-11-26 18:06:05.105930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:48.241 [2024-11-26 18:06:05.105941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:48.241 [2024-11-26 18:06:05.105952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:48.241 [2024-11-26 18:06:05.105963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:48.241 [2024-11-26 18:06:05.105974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:48.241 [2024-11-26 18:06:05.105985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:48.241 [2024-11-26 18:06:05.106004] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:48.241 [2024-11-26 18:06:05.106014] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6a8373d9-959b-4681-90e1-ca6922fae046 00:19:48.241 [2024-11-26 18:06:05.106026] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:19:48.241 [2024-11-26 18:06:05.106036] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 22720 00:19:48.241 [2024-11-26 18:06:05.106046] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 21760 00:19:48.241 [2024-11-26 18:06:05.106057] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0441 00:19:48.241 [2024-11-26 18:06:05.106068] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:48.241 [2024-11-26 18:06:05.106091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:48.241 [2024-11-26 18:06:05.106101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:48.241 [2024-11-26 18:06:05.106111] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:48.241 [2024-11-26 18:06:05.106120] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:48.241 [2024-11-26 18:06:05.106132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.241 [2024-11-26 18:06:05.106143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:48.241 [2024-11-26 18:06:05.106153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.434 ms 00:19:48.241 [2024-11-26 18:06:05.106163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.241 [2024-11-26 18:06:05.107917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.241 [2024-11-26 18:06:05.107940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:48.241 [2024-11-26 18:06:05.107962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.724 ms 00:19:48.241 [2024-11-26 18:06:05.107977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.241 [2024-11-26 18:06:05.108057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.241 [2024-11-26 18:06:05.108069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:48.241 [2024-11-26 18:06:05.108080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:48.241 [2024-11-26 18:06:05.108091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
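The WAF figure in the statistics dump above is simply total writes divided by user writes: 22720 / 21760 ≈ 1.0441, so the counters and the reported value agree. A minimal sanity check from a shell (illustrative only, not part of the test suite):

total_writes=22720
user_writes=21760
# WAF (write amplification factor) = media writes / host writes
awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.4f\n", t / u }'
# prints: WAF: 1.0441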
00:19:48.241 [2024-11-26 18:06:05.115183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.241 [2024-11-26 18:06:05.115339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:48.241 [2024-11-26 18:06:05.115415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.241 [2024-11-26 18:06:05.115495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.241 [2024-11-26 18:06:05.115575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.241 [2024-11-26 18:06:05.115610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:48.241 [2024-11-26 18:06:05.115642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.241 [2024-11-26 18:06:05.115724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.241 [2024-11-26 18:06:05.115840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.241 [2024-11-26 18:06:05.115881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:48.241 [2024-11-26 18:06:05.115924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.241 [2024-11-26 18:06:05.115955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.241 [2024-11-26 18:06:05.116097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.241 [2024-11-26 18:06:05.116111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:48.241 [2024-11-26 18:06:05.116121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.242 [2024-11-26 18:06:05.116139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.242 [2024-11-26 18:06:05.129506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.242 [2024-11-26 18:06:05.129560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:48.242 [2024-11-26 18:06:05.129574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.242 [2024-11-26 18:06:05.129584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.242 [2024-11-26 18:06:05.134281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.242 [2024-11-26 18:06:05.134314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:48.242 [2024-11-26 18:06:05.134327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.242 [2024-11-26 18:06:05.134337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.242 [2024-11-26 18:06:05.134404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.242 [2024-11-26 18:06:05.134417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:48.242 [2024-11-26 18:06:05.134429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.242 [2024-11-26 18:06:05.134445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.242 [2024-11-26 18:06:05.134513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.242 [2024-11-26 18:06:05.134527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:48.242 [2024-11-26 18:06:05.134538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.242 [2024-11-26 
18:06:05.134548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.242 [2024-11-26 18:06:05.134627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.242 [2024-11-26 18:06:05.134647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:48.242 [2024-11-26 18:06:05.134658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.242 [2024-11-26 18:06:05.134668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.242 [2024-11-26 18:06:05.134711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.242 [2024-11-26 18:06:05.134723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:48.242 [2024-11-26 18:06:05.134734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.242 [2024-11-26 18:06:05.134744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.242 [2024-11-26 18:06:05.134781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.242 [2024-11-26 18:06:05.134792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:48.242 [2024-11-26 18:06:05.134803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.242 [2024-11-26 18:06:05.134821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.242 [2024-11-26 18:06:05.134868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.242 [2024-11-26 18:06:05.134879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:48.242 [2024-11-26 18:06:05.134889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.242 [2024-11-26 18:06:05.134899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.242 [2024-11-26 18:06:05.135019] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 164.490 ms, result 0 00:19:48.500 00:19:48.500 00:19:48.500 18:06:05 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:50.406 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:50.406 18:06:07 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:19:50.406 18:06:07 -- ftl/restore.sh@85 -- # restore_kill 00:19:50.406 18:06:07 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:50.406 18:06:07 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:50.406 18:06:07 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:50.406 18:06:07 -- ftl/restore.sh@32 -- # killprocess 84090 00:19:50.406 18:06:07 -- common/autotest_common.sh@936 -- # '[' -z 84090 ']' 00:19:50.406 18:06:07 -- common/autotest_common.sh@940 -- # kill -0 84090 00:19:50.406 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (84090) - No such process 00:19:50.406 Process with pid 84090 is not found 00:19:50.406 18:06:07 -- common/autotest_common.sh@963 -- # echo 'Process with pid 84090 is not found' 00:19:50.406 18:06:07 -- ftl/restore.sh@33 -- # remove_shm 00:19:50.406 Remove shared memory files 00:19:50.406 18:06:07 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:50.406 18:06:07 -- ftl/common.sh@205 -- # rm -f rm -f 00:19:50.406 18:06:07 -- ftl/common.sh@206 -- # rm -f rm -f 00:19:50.406 18:06:07 -- ftl/common.sh@207 -- # rm -f 
rm -f 00:19:50.406 18:06:07 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:50.406 18:06:07 -- ftl/common.sh@209 -- # rm -f rm -f 00:19:50.406 ************************************ 00:19:50.406 END TEST ftl_restore 00:19:50.406 ************************************ 00:19:50.406 00:19:50.406 real 2m49.670s 00:19:50.406 user 2m37.161s 00:19:50.406 sys 0m13.981s 00:19:50.406 18:06:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:50.406 18:06:07 -- common/autotest_common.sh@10 -- # set +x 00:19:50.665 18:06:07 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:19:50.665 18:06:07 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:19:50.665 18:06:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:50.665 18:06:07 -- common/autotest_common.sh@10 -- # set +x 00:19:50.665 ************************************ 00:19:50.665 START TEST ftl_dirty_shutdown 00:19:50.665 ************************************ 00:19:50.665 18:06:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:19:50.665 * Looking for test storage... 00:19:50.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:50.665 18:06:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:50.665 18:06:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:50.665 18:06:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:50.924 18:06:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:50.924 18:06:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:50.924 18:06:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:50.924 18:06:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:50.924 18:06:07 -- scripts/common.sh@335 -- # IFS=.-: 00:19:50.924 18:06:07 -- scripts/common.sh@335 -- # read -ra ver1 00:19:50.924 18:06:07 -- scripts/common.sh@336 -- # IFS=.-: 00:19:50.924 18:06:07 -- scripts/common.sh@336 -- # read -ra ver2 00:19:50.924 18:06:07 -- scripts/common.sh@337 -- # local 'op=<' 00:19:50.924 18:06:07 -- scripts/common.sh@339 -- # ver1_l=2 00:19:50.924 18:06:07 -- scripts/common.sh@340 -- # ver2_l=1 00:19:50.924 18:06:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:50.924 18:06:07 -- scripts/common.sh@343 -- # case "$op" in 00:19:50.924 18:06:07 -- scripts/common.sh@344 -- # : 1 00:19:50.924 18:06:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:50.924 18:06:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:50.924 18:06:07 -- scripts/common.sh@364 -- # decimal 1 00:19:50.924 18:06:07 -- scripts/common.sh@352 -- # local d=1 00:19:50.924 18:06:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:50.924 18:06:07 -- scripts/common.sh@354 -- # echo 1 00:19:50.924 18:06:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:50.924 18:06:07 -- scripts/common.sh@365 -- # decimal 2 00:19:50.924 18:06:07 -- scripts/common.sh@352 -- # local d=2 00:19:50.924 18:06:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:50.924 18:06:07 -- scripts/common.sh@354 -- # echo 2 00:19:50.924 18:06:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:50.924 18:06:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:50.924 18:06:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:50.924 18:06:07 -- scripts/common.sh@367 -- # return 0 00:19:50.924 18:06:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:50.924 18:06:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:50.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.924 --rc genhtml_branch_coverage=1 00:19:50.924 --rc genhtml_function_coverage=1 00:19:50.924 --rc genhtml_legend=1 00:19:50.924 --rc geninfo_all_blocks=1 00:19:50.924 --rc geninfo_unexecuted_blocks=1 00:19:50.924 00:19:50.924 ' 00:19:50.924 18:06:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:50.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.924 --rc genhtml_branch_coverage=1 00:19:50.924 --rc genhtml_function_coverage=1 00:19:50.924 --rc genhtml_legend=1 00:19:50.924 --rc geninfo_all_blocks=1 00:19:50.924 --rc geninfo_unexecuted_blocks=1 00:19:50.924 00:19:50.924 ' 00:19:50.924 18:06:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:50.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.924 --rc genhtml_branch_coverage=1 00:19:50.924 --rc genhtml_function_coverage=1 00:19:50.924 --rc genhtml_legend=1 00:19:50.924 --rc geninfo_all_blocks=1 00:19:50.924 --rc geninfo_unexecuted_blocks=1 00:19:50.924 00:19:50.924 ' 00:19:50.924 18:06:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:50.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.924 --rc genhtml_branch_coverage=1 00:19:50.924 --rc genhtml_function_coverage=1 00:19:50.924 --rc genhtml_legend=1 00:19:50.924 --rc geninfo_all_blocks=1 00:19:50.924 --rc geninfo_unexecuted_blocks=1 00:19:50.924 00:19:50.924 ' 00:19:50.924 18:06:07 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:50.924 18:06:07 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:19:50.924 18:06:07 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:50.924 18:06:07 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:50.924 18:06:07 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:50.924 18:06:07 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:50.924 18:06:07 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:50.924 18:06:07 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:50.924 18:06:07 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:50.924 18:06:07 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:50.924 18:06:07 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:50.924 18:06:07 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:50.924 18:06:07 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:50.924 18:06:07 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:50.924 18:06:07 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:50.924 18:06:07 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:50.924 18:06:07 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:50.924 18:06:07 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:50.924 18:06:07 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:50.924 18:06:07 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:50.924 18:06:07 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:50.924 18:06:07 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:50.924 18:06:07 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:50.924 18:06:07 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:50.925 18:06:07 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:50.925 18:06:07 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:50.925 18:06:07 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:50.925 18:06:07 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:50.925 18:06:07 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:50.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
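The path resolution traced in ftl/common.sh above is the usual self-locating idiom in these scripts: resolve the script's own directory, walk up to the repository root, and derive every other path from it. A minimal sketch of the pattern (reconstructed from the trace, not the verbatim common.sh):

# resolve .../spdk/test/ftl and .../spdk from the script's own location
testdir=$(readlink -f "$(dirname "$0")")
rootdir=$(readlink -f "$testdir/../..")
rpc_py=$rootdir/scripts/rpc.py    # matches the rpc_py value seen in the trace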
00:19:50.925 18:06:07 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:50.925 18:06:07 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:50.925 18:06:07 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:19:50.925 18:06:07 -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:19:50.925 18:06:07 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0 00:19:50.925 18:06:07 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:19:50.925 18:06:07 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:19:50.925 18:06:07 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0 00:19:50.925 18:06:07 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:19:50.925 18:06:07 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:19:50.925 18:06:07 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:19:50.925 18:06:07 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:19:50.925 18:06:07 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:19:50.925 18:06:07 -- ftl/dirty_shutdown.sh@45 -- # svcpid=85940 00:19:50.925 18:06:07 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:50.925 18:06:07 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 85940 00:19:50.925 18:06:07 -- common/autotest_common.sh@829 -- # '[' -z 85940 ']' 00:19:50.925 18:06:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.925 18:06:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:50.925 18:06:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.925 18:06:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:50.925 18:06:07 -- common/autotest_common.sh@10 -- # set +x 00:19:50.925 [2024-11-26 18:06:07.784719] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
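The dirty_shutdown.sh option handling traced above reduces to a standard getopts loop: -c names the NV-cache BDF, the leftover positional argument is the base device, and the traced 'shift 2' consumes the one option plus its value. A sketch under the assumption that -u carries an FTL UUID (not exercised in this run):

nv_cache='' uuid=''
while getopts ':u:c:' opt; do
	case $opt in
		u) uuid=$OPTARG ;;      # assumed: FTL instance UUID (unused here)
		c) nv_cache=$OPTARG ;;  # PCIe BDF of the NV-cache device, e.g. 0000:00:06.0
	esac
done
shift $((OPTIND - 1))           # equals the traced 'shift 2' for a single -c pair
device=$1                       # base device BDF, e.g. 0000:00:07.0
timeout=240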
00:19:50.925 [2024-11-26 18:06:07.785229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85940 ] 00:19:51.184 [2024-11-26 18:06:07.938274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.184 [2024-11-26 18:06:08.001895] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:51.184 [2024-11-26 18:06:08.002594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.120 18:06:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:52.120 18:06:08 -- common/autotest_common.sh@862 -- # return 0 00:19:52.120 18:06:08 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:19:52.120 18:06:08 -- ftl/common.sh@54 -- # local name=nvme0 00:19:52.120 18:06:08 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:19:52.120 18:06:08 -- ftl/common.sh@56 -- # local size=103424 00:19:52.120 18:06:08 -- ftl/common.sh@59 -- # local base_bdev 00:19:52.120 18:06:08 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:19:52.120 18:06:09 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:52.120 18:06:09 -- ftl/common.sh@62 -- # local base_size 00:19:52.120 18:06:09 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:52.120 18:06:09 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:19:52.120 18:06:09 -- common/autotest_common.sh@1368 -- # local bdev_info 00:19:52.120 18:06:09 -- common/autotest_common.sh@1369 -- # local bs 00:19:52.120 18:06:09 -- common/autotest_common.sh@1370 -- # local nb 00:19:52.120 18:06:09 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:52.384 18:06:09 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:19:52.384 { 00:19:52.384 "name": "nvme0n1", 00:19:52.385 "aliases": [ 00:19:52.385 "eaf7ea42-3ec8-4a36-8639-a7df934c730b" 00:19:52.385 ], 00:19:52.385 "product_name": "NVMe disk", 00:19:52.385 "block_size": 4096, 00:19:52.385 "num_blocks": 1310720, 00:19:52.385 "uuid": "eaf7ea42-3ec8-4a36-8639-a7df934c730b", 00:19:52.385 "assigned_rate_limits": { 00:19:52.385 "rw_ios_per_sec": 0, 00:19:52.385 "rw_mbytes_per_sec": 0, 00:19:52.385 "r_mbytes_per_sec": 0, 00:19:52.385 "w_mbytes_per_sec": 0 00:19:52.385 }, 00:19:52.385 "claimed": true, 00:19:52.385 "claim_type": "read_many_write_one", 00:19:52.385 "zoned": false, 00:19:52.385 "supported_io_types": { 00:19:52.385 "read": true, 00:19:52.385 "write": true, 00:19:52.385 "unmap": true, 00:19:52.385 "write_zeroes": true, 00:19:52.385 "flush": true, 00:19:52.385 "reset": true, 00:19:52.385 "compare": true, 00:19:52.385 "compare_and_write": false, 00:19:52.385 "abort": true, 00:19:52.385 "nvme_admin": true, 00:19:52.385 "nvme_io": true 00:19:52.385 }, 00:19:52.385 "driver_specific": { 00:19:52.385 "nvme": [ 00:19:52.385 { 00:19:52.385 "pci_address": "0000:00:07.0", 00:19:52.385 "trid": { 00:19:52.385 "trtype": "PCIe", 00:19:52.385 "traddr": "0000:00:07.0" 00:19:52.385 }, 00:19:52.385 "ctrlr_data": { 00:19:52.385 "cntlid": 0, 00:19:52.385 "vendor_id": "0x1b36", 00:19:52.385 "model_number": "QEMU NVMe Ctrl", 00:19:52.385 "serial_number": "12341", 00:19:52.385 "firmware_revision": "8.0.0", 00:19:52.385 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:52.385 "oacs": { 00:19:52.385 "security": 
0, 00:19:52.385 "format": 1, 00:19:52.385 "firmware": 0, 00:19:52.385 "ns_manage": 1 00:19:52.385 }, 00:19:52.385 "multi_ctrlr": false, 00:19:52.385 "ana_reporting": false 00:19:52.385 }, 00:19:52.385 "vs": { 00:19:52.385 "nvme_version": "1.4" 00:19:52.385 }, 00:19:52.385 "ns_data": { 00:19:52.385 "id": 1, 00:19:52.385 "can_share": false 00:19:52.385 } 00:19:52.385 } 00:19:52.385 ], 00:19:52.385 "mp_policy": "active_passive" 00:19:52.385 } 00:19:52.385 } 00:19:52.385 ]' 00:19:52.385 18:06:09 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:19:52.385 18:06:09 -- common/autotest_common.sh@1372 -- # bs=4096 00:19:52.385 18:06:09 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:19:52.648 18:06:09 -- common/autotest_common.sh@1373 -- # nb=1310720 00:19:52.648 18:06:09 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:19:52.648 18:06:09 -- common/autotest_common.sh@1377 -- # echo 5120 00:19:52.648 18:06:09 -- ftl/common.sh@63 -- # base_size=5120 00:19:52.648 18:06:09 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:52.648 18:06:09 -- ftl/common.sh@67 -- # clear_lvols 00:19:52.648 18:06:09 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:52.648 18:06:09 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:52.648 18:06:09 -- ftl/common.sh@28 -- # stores=53078330-6434-4d0c-9a59-26d3452ec53b 00:19:52.648 18:06:09 -- ftl/common.sh@29 -- # for lvs in $stores 00:19:52.648 18:06:09 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 53078330-6434-4d0c-9a59-26d3452ec53b 00:19:52.907 18:06:09 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:53.165 18:06:10 -- ftl/common.sh@68 -- # lvs=f1bb7d02-aedd-44cb-ba29-289bb8adb305 00:19:53.165 18:06:10 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f1bb7d02-aedd-44cb-ba29-289bb8adb305 00:19:53.424 18:06:10 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=dba8648a-72ac-4c3d-aed7-3fd2ce1b911a 00:19:53.424 18:06:10 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:19:53.424 18:06:10 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 dba8648a-72ac-4c3d-aed7-3fd2ce1b911a 00:19:53.425 18:06:10 -- ftl/common.sh@35 -- # local name=nvc0 00:19:53.425 18:06:10 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:19:53.425 18:06:10 -- ftl/common.sh@37 -- # local base_bdev=dba8648a-72ac-4c3d-aed7-3fd2ce1b911a 00:19:53.425 18:06:10 -- ftl/common.sh@38 -- # local cache_size= 00:19:53.425 18:06:10 -- ftl/common.sh@41 -- # get_bdev_size dba8648a-72ac-4c3d-aed7-3fd2ce1b911a 00:19:53.425 18:06:10 -- common/autotest_common.sh@1367 -- # local bdev_name=dba8648a-72ac-4c3d-aed7-3fd2ce1b911a 00:19:53.425 18:06:10 -- common/autotest_common.sh@1368 -- # local bdev_info 00:19:53.425 18:06:10 -- common/autotest_common.sh@1369 -- # local bs 00:19:53.425 18:06:10 -- common/autotest_common.sh@1370 -- # local nb 00:19:53.425 18:06:10 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dba8648a-72ac-4c3d-aed7-3fd2ce1b911a 00:19:53.684 18:06:10 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:19:53.684 { 00:19:53.684 "name": "dba8648a-72ac-4c3d-aed7-3fd2ce1b911a", 00:19:53.684 "aliases": [ 00:19:53.684 "lvs/nvme0n1p0" 00:19:53.684 ], 00:19:53.684 "product_name": "Logical Volume", 00:19:53.684 "block_size": 4096, 00:19:53.684 "num_blocks": 26476544, 00:19:53.684 
"uuid": "dba8648a-72ac-4c3d-aed7-3fd2ce1b911a", 00:19:53.684 "assigned_rate_limits": { 00:19:53.684 "rw_ios_per_sec": 0, 00:19:53.684 "rw_mbytes_per_sec": 0, 00:19:53.684 "r_mbytes_per_sec": 0, 00:19:53.684 "w_mbytes_per_sec": 0 00:19:53.684 }, 00:19:53.684 "claimed": false, 00:19:53.684 "zoned": false, 00:19:53.684 "supported_io_types": { 00:19:53.684 "read": true, 00:19:53.684 "write": true, 00:19:53.684 "unmap": true, 00:19:53.684 "write_zeroes": true, 00:19:53.684 "flush": false, 00:19:53.684 "reset": true, 00:19:53.684 "compare": false, 00:19:53.684 "compare_and_write": false, 00:19:53.684 "abort": false, 00:19:53.684 "nvme_admin": false, 00:19:53.684 "nvme_io": false 00:19:53.684 }, 00:19:53.684 "driver_specific": { 00:19:53.684 "lvol": { 00:19:53.684 "lvol_store_uuid": "f1bb7d02-aedd-44cb-ba29-289bb8adb305", 00:19:53.684 "base_bdev": "nvme0n1", 00:19:53.684 "thin_provision": true, 00:19:53.684 "snapshot": false, 00:19:53.684 "clone": false, 00:19:53.684 "esnap_clone": false 00:19:53.684 } 00:19:53.684 } 00:19:53.684 } 00:19:53.684 ]' 00:19:53.684 18:06:10 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:19:53.684 18:06:10 -- common/autotest_common.sh@1372 -- # bs=4096 00:19:53.684 18:06:10 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:19:53.684 18:06:10 -- common/autotest_common.sh@1373 -- # nb=26476544 00:19:53.684 18:06:10 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:19:53.684 18:06:10 -- common/autotest_common.sh@1377 -- # echo 103424 00:19:53.684 18:06:10 -- ftl/common.sh@41 -- # local base_size=5171 00:19:53.684 18:06:10 -- ftl/common.sh@44 -- # local nvc_bdev 00:19:53.684 18:06:10 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:19:53.943 18:06:10 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:53.943 18:06:10 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:53.943 18:06:10 -- ftl/common.sh@48 -- # get_bdev_size dba8648a-72ac-4c3d-aed7-3fd2ce1b911a 00:19:53.943 18:06:10 -- common/autotest_common.sh@1367 -- # local bdev_name=dba8648a-72ac-4c3d-aed7-3fd2ce1b911a 00:19:53.943 18:06:10 -- common/autotest_common.sh@1368 -- # local bdev_info 00:19:53.943 18:06:10 -- common/autotest_common.sh@1369 -- # local bs 00:19:53.943 18:06:10 -- common/autotest_common.sh@1370 -- # local nb 00:19:54.201 18:06:10 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dba8648a-72ac-4c3d-aed7-3fd2ce1b911a 00:19:54.465 18:06:11 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:19:54.465 { 00:19:54.465 "name": "dba8648a-72ac-4c3d-aed7-3fd2ce1b911a", 00:19:54.465 "aliases": [ 00:19:54.465 "lvs/nvme0n1p0" 00:19:54.465 ], 00:19:54.465 "product_name": "Logical Volume", 00:19:54.465 "block_size": 4096, 00:19:54.465 "num_blocks": 26476544, 00:19:54.465 "uuid": "dba8648a-72ac-4c3d-aed7-3fd2ce1b911a", 00:19:54.465 "assigned_rate_limits": { 00:19:54.465 "rw_ios_per_sec": 0, 00:19:54.465 "rw_mbytes_per_sec": 0, 00:19:54.465 "r_mbytes_per_sec": 0, 00:19:54.465 "w_mbytes_per_sec": 0 00:19:54.465 }, 00:19:54.465 "claimed": false, 00:19:54.465 "zoned": false, 00:19:54.465 "supported_io_types": { 00:19:54.465 "read": true, 00:19:54.465 "write": true, 00:19:54.465 "unmap": true, 00:19:54.465 "write_zeroes": true, 00:19:54.465 "flush": false, 00:19:54.465 "reset": true, 00:19:54.465 "compare": false, 00:19:54.465 "compare_and_write": false, 00:19:54.465 "abort": false, 00:19:54.465 "nvme_admin": false, 00:19:54.465 "nvme_io": false 00:19:54.465 }, 
00:19:54.465 "driver_specific": { 00:19:54.465 "lvol": { 00:19:54.465 "lvol_store_uuid": "f1bb7d02-aedd-44cb-ba29-289bb8adb305", 00:19:54.465 "base_bdev": "nvme0n1", 00:19:54.465 "thin_provision": true, 00:19:54.465 "snapshot": false, 00:19:54.465 "clone": false, 00:19:54.465 "esnap_clone": false 00:19:54.465 } 00:19:54.465 } 00:19:54.465 } 00:19:54.465 ]' 00:19:54.465 18:06:11 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:19:54.465 18:06:11 -- common/autotest_common.sh@1372 -- # bs=4096 00:19:54.465 18:06:11 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:19:54.465 18:06:11 -- common/autotest_common.sh@1373 -- # nb=26476544 00:19:54.465 18:06:11 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:19:54.465 18:06:11 -- common/autotest_common.sh@1377 -- # echo 103424 00:19:54.465 18:06:11 -- ftl/common.sh@48 -- # cache_size=5171 00:19:54.465 18:06:11 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:54.727 18:06:11 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:19:54.727 18:06:11 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size dba8648a-72ac-4c3d-aed7-3fd2ce1b911a 00:19:54.727 18:06:11 -- common/autotest_common.sh@1367 -- # local bdev_name=dba8648a-72ac-4c3d-aed7-3fd2ce1b911a 00:19:54.727 18:06:11 -- common/autotest_common.sh@1368 -- # local bdev_info 00:19:54.727 18:06:11 -- common/autotest_common.sh@1369 -- # local bs 00:19:54.727 18:06:11 -- common/autotest_common.sh@1370 -- # local nb 00:19:54.728 18:06:11 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dba8648a-72ac-4c3d-aed7-3fd2ce1b911a 00:19:54.986 18:06:11 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:19:54.986 { 00:19:54.986 "name": "dba8648a-72ac-4c3d-aed7-3fd2ce1b911a", 00:19:54.986 "aliases": [ 00:19:54.986 "lvs/nvme0n1p0" 00:19:54.986 ], 00:19:54.986 "product_name": "Logical Volume", 00:19:54.986 "block_size": 4096, 00:19:54.986 "num_blocks": 26476544, 00:19:54.986 "uuid": "dba8648a-72ac-4c3d-aed7-3fd2ce1b911a", 00:19:54.986 "assigned_rate_limits": { 00:19:54.986 "rw_ios_per_sec": 0, 00:19:54.986 "rw_mbytes_per_sec": 0, 00:19:54.986 "r_mbytes_per_sec": 0, 00:19:54.986 "w_mbytes_per_sec": 0 00:19:54.986 }, 00:19:54.986 "claimed": false, 00:19:54.986 "zoned": false, 00:19:54.986 "supported_io_types": { 00:19:54.986 "read": true, 00:19:54.986 "write": true, 00:19:54.986 "unmap": true, 00:19:54.986 "write_zeroes": true, 00:19:54.986 "flush": false, 00:19:54.986 "reset": true, 00:19:54.986 "compare": false, 00:19:54.986 "compare_and_write": false, 00:19:54.986 "abort": false, 00:19:54.986 "nvme_admin": false, 00:19:54.986 "nvme_io": false 00:19:54.986 }, 00:19:54.986 "driver_specific": { 00:19:54.986 "lvol": { 00:19:54.986 "lvol_store_uuid": "f1bb7d02-aedd-44cb-ba29-289bb8adb305", 00:19:54.986 "base_bdev": "nvme0n1", 00:19:54.986 "thin_provision": true, 00:19:54.986 "snapshot": false, 00:19:54.986 "clone": false, 00:19:54.986 "esnap_clone": false 00:19:54.986 } 00:19:54.986 } 00:19:54.986 } 00:19:54.986 ]' 00:19:54.986 18:06:11 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:19:54.986 18:06:11 -- common/autotest_common.sh@1372 -- # bs=4096 00:19:54.986 18:06:11 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:19:54.986 18:06:11 -- common/autotest_common.sh@1373 -- # nb=26476544 00:19:54.986 18:06:11 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:19:54.986 18:06:11 -- common/autotest_common.sh@1377 -- # echo 103424 00:19:54.986 
18:06:11 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:19:54.986 18:06:11 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d dba8648a-72ac-4c3d-aed7-3fd2ce1b911a --l2p_dram_limit 10' 00:19:54.986 18:06:11 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:19:54.986 18:06:11 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:19:54.986 18:06:11 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:54.986 18:06:11 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d dba8648a-72ac-4c3d-aed7-3fd2ce1b911a --l2p_dram_limit 10 -c nvc0n1p0 00:19:55.245 [2024-11-26 18:06:11.973164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.245 [2024-11-26 18:06:11.973548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:55.245 [2024-11-26 18:06:11.973593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:55.245 [2024-11-26 18:06:11.973606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.245 [2024-11-26 18:06:11.973729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.245 [2024-11-26 18:06:11.973745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:55.245 [2024-11-26 18:06:11.973771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:19:55.245 [2024-11-26 18:06:11.973795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.245 [2024-11-26 18:06:11.973837] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:55.245 [2024-11-26 18:06:11.974243] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:55.245 [2024-11-26 18:06:11.974273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.245 [2024-11-26 18:06:11.974285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:55.245 [2024-11-26 18:06:11.974301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:19:55.245 [2024-11-26 18:06:11.974313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.245 [2024-11-26 18:06:11.974424] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID dab23966-cac7-4916-9c8e-e833bd8ea971 00:19:55.245 [2024-11-26 18:06:11.977064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.245 [2024-11-26 18:06:11.977208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:55.245 [2024-11-26 18:06:11.977233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:55.246 [2024-11-26 18:06:11.977259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.246 [2024-11-26 18:06:11.992018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.246 [2024-11-26 18:06:11.992098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:55.246 [2024-11-26 18:06:11.992118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.676 ms 00:19:55.246 [2024-11-26 18:06:11.992138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.246 [2024-11-26 18:06:11.992299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.246 [2024-11-26 18:06:11.992322] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:55.246 [2024-11-26 18:06:11.992335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:19:55.246 [2024-11-26 18:06:11.992351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.246 [2024-11-26 18:06:11.992482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.246 [2024-11-26 18:06:11.992508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:55.246 [2024-11-26 18:06:11.992522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:55.246 [2024-11-26 18:06:11.992542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.246 [2024-11-26 18:06:11.992593] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:55.246 [2024-11-26 18:06:11.995764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.246 [2024-11-26 18:06:11.995809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:55.246 [2024-11-26 18:06:11.995832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.194 ms 00:19:55.246 [2024-11-26 18:06:11.995857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.246 [2024-11-26 18:06:11.995924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.246 [2024-11-26 18:06:11.995938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:55.246 [2024-11-26 18:06:11.995964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:55.246 [2024-11-26 18:06:11.995977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.246 [2024-11-26 18:06:11.996009] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:55.246 [2024-11-26 18:06:11.996170] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:55.246 [2024-11-26 18:06:11.996197] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:55.246 [2024-11-26 18:06:11.996215] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:55.246 [2024-11-26 18:06:11.996239] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:55.246 [2024-11-26 18:06:11.996255] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:55.246 [2024-11-26 18:06:11.996273] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:55.246 [2024-11-26 18:06:11.996286] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:55.246 [2024-11-26 18:06:11.996303] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:55.246 [2024-11-26 18:06:11.996315] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:55.246 [2024-11-26 18:06:11.996345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.246 [2024-11-26 18:06:11.996358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:55.246 [2024-11-26 18:06:11.996374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:19:55.246 [2024-11-26 18:06:11.996394] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.246 [2024-11-26 18:06:11.996491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.246 [2024-11-26 18:06:11.996506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:55.246 [2024-11-26 18:06:11.996522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:55.246 [2024-11-26 18:06:11.996534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.246 [2024-11-26 18:06:11.996654] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:55.246 [2024-11-26 18:06:11.996671] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:55.246 [2024-11-26 18:06:11.996687] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:55.246 [2024-11-26 18:06:11.996700] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.246 [2024-11-26 18:06:11.996718] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:55.246 [2024-11-26 18:06:11.996729] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:55.246 [2024-11-26 18:06:11.996743] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:55.246 [2024-11-26 18:06:11.996754] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:55.246 [2024-11-26 18:06:11.996768] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:55.246 [2024-11-26 18:06:11.996779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:55.246 [2024-11-26 18:06:11.996793] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:55.246 [2024-11-26 18:06:11.996804] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:55.246 [2024-11-26 18:06:11.996822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:55.246 [2024-11-26 18:06:11.996834] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:55.246 [2024-11-26 18:06:11.996848] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:55.246 [2024-11-26 18:06:11.996859] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.246 [2024-11-26 18:06:11.996872] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:55.246 [2024-11-26 18:06:11.996901] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:55.246 [2024-11-26 18:06:11.996916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.246 [2024-11-26 18:06:11.996928] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:55.246 [2024-11-26 18:06:11.996955] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:55.246 [2024-11-26 18:06:11.996966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:55.246 [2024-11-26 18:06:11.996980] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:55.246 [2024-11-26 18:06:11.996991] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:55.246 [2024-11-26 18:06:11.997004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:55.246 [2024-11-26 18:06:11.997015] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:55.246 [2024-11-26 18:06:11.997029] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:55.246 [2024-11-26 18:06:11.997039] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:55.246 [2024-11-26 18:06:11.997055] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:55.246 [2024-11-26 18:06:11.997065] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:55.246 [2024-11-26 18:06:11.997084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:55.246 [2024-11-26 18:06:11.997094] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:55.246 [2024-11-26 18:06:11.997107] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:55.246 [2024-11-26 18:06:11.997117] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:55.246 [2024-11-26 18:06:11.997130] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:55.246 [2024-11-26 18:06:11.997141] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:55.246 [2024-11-26 18:06:11.997153] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:55.246 [2024-11-26 18:06:11.997164] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:55.246 [2024-11-26 18:06:11.997178] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:55.246 [2024-11-26 18:06:11.997188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:55.246 [2024-11-26 18:06:11.997201] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:55.246 [2024-11-26 18:06:11.997213] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:55.246 [2024-11-26 18:06:11.997232] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:55.246 [2024-11-26 18:06:11.997244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.246 [2024-11-26 18:06:11.997263] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:55.246 [2024-11-26 18:06:11.997275] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:55.246 [2024-11-26 18:06:11.997290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:55.246 [2024-11-26 18:06:11.997315] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:55.246 [2024-11-26 18:06:11.997330] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:55.246 [2024-11-26 18:06:11.997341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:55.246 [2024-11-26 18:06:11.997356] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:55.246 [2024-11-26 18:06:11.997373] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:55.246 [2024-11-26 18:06:11.997390] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:55.246 [2024-11-26 18:06:11.997403] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:55.246 [2024-11-26 18:06:11.997421] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:55.246 [2024-11-26 18:06:11.997434] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 
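In the layout dump above, the l2p region size follows from the two figures printed with it: 20971520 L2P entries x 4 bytes per address = 83886080 bytes = 80.00 MiB, exactly the blocks value reported for the l2p region. A one-liner to reproduce the arithmetic:

awk 'BEGIN { printf "l2p region: %.2f MiB\n", 20971520 * 4 / (1024 * 1024) }'
# prints: l2p region: 80.00 MiB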
00:19:55.246 [2024-11-26 18:06:11.997449] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:55.246 [2024-11-26 18:06:11.997461] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:55.246 [2024-11-26 18:06:11.997476] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:55.246 [2024-11-26 18:06:11.997507] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:55.246 [2024-11-26 18:06:11.997527] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:55.246 [2024-11-26 18:06:11.997539] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:55.246 [2024-11-26 18:06:11.997554] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:55.246 [2024-11-26 18:06:11.997566] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:55.246 [2024-11-26 18:06:11.997581] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:55.246 [2024-11-26 18:06:11.997593] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:55.246 [2024-11-26 18:06:11.997609] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:55.246 [2024-11-26 18:06:11.997623] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:55.246 [2024-11-26 18:06:11.997638] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:55.246 [2024-11-26 18:06:11.997650] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:55.246 [2024-11-26 18:06:11.997665] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:55.246 [2024-11-26 18:06:11.997678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.246 [2024-11-26 18:06:11.997693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:55.246 [2024-11-26 18:06:11.997706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.070 ms 00:19:55.246 [2024-11-26 18:06:11.997720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.246 [2024-11-26 18:06:12.012884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.246 [2024-11-26 18:06:12.012972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:55.246 [2024-11-26 18:06:12.012994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.118 ms 00:19:55.246 [2024-11-26 18:06:12.013010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.246 [2024-11-26 18:06:12.013161] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.246 [2024-11-26 18:06:12.013179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:55.246 [2024-11-26 18:06:12.013192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:19:55.246 [2024-11-26 18:06:12.013208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.246 [2024-11-26 18:06:12.034137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.246 [2024-11-26 18:06:12.034257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:55.246 [2024-11-26 18:06:12.034279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.862 ms 00:19:55.246 [2024-11-26 18:06:12.034304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.246 [2024-11-26 18:06:12.034389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.246 [2024-11-26 18:06:12.034408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:55.246 [2024-11-26 18:06:12.034422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:55.246 [2024-11-26 18:06:12.034439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.246 [2024-11-26 18:06:12.035351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.246 [2024-11-26 18:06:12.035379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:55.246 [2024-11-26 18:06:12.035393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:19:55.246 [2024-11-26 18:06:12.035409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.246 [2024-11-26 18:06:12.035566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.246 [2024-11-26 18:06:12.035589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:55.246 [2024-11-26 18:06:12.035602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:19:55.246 [2024-11-26 18:06:12.035617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.246 [2024-11-26 18:06:12.048379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.246 [2024-11-26 18:06:12.048496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:55.246 [2024-11-26 18:06:12.048518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.750 ms 00:19:55.246 [2024-11-26 18:06:12.048548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.246 [2024-11-26 18:06:12.061763] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:55.247 [2024-11-26 18:06:12.067443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.247 [2024-11-26 18:06:12.067529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:55.247 [2024-11-26 18:06:12.067564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.736 ms 00:19:55.247 [2024-11-26 18:06:12.067577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.247 [2024-11-26 18:06:12.138544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.247 [2024-11-26 18:06:12.138871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:55.247 [2024-11-26 18:06:12.138917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 70.980 ms 00:19:55.247 [2024-11-26 18:06:12.138943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.247 [2024-11-26 18:06:12.139039] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:19:55.247 [2024-11-26 18:06:12.139059] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:19:58.568 [2024-11-26 18:06:15.070940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.568 [2024-11-26 18:06:15.071240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:58.568 [2024-11-26 18:06:15.071292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2936.653 ms 00:19:58.568 [2024-11-26 18:06:15.071306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 18:06:15.071539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.568 [2024-11-26 18:06:15.071555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:58.568 [2024-11-26 18:06:15.071570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:19:58.568 [2024-11-26 18:06:15.071582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 18:06:15.075270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.568 [2024-11-26 18:06:15.075319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:58.568 [2024-11-26 18:06:15.075350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.661 ms 00:19:58.568 [2024-11-26 18:06:15.075362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 18:06:15.078499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.568 [2024-11-26 18:06:15.078540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:58.568 [2024-11-26 18:06:15.078560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.083 ms 00:19:58.568 [2024-11-26 18:06:15.078571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 18:06:15.078754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.568 [2024-11-26 18:06:15.078769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:58.568 [2024-11-26 18:06:15.078784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:19:58.568 [2024-11-26 18:06:15.078796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 18:06:15.105769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.568 [2024-11-26 18:06:15.106060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:58.568 [2024-11-26 18:06:15.106094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.982 ms 00:19:58.568 [2024-11-26 18:06:15.106124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 18:06:15.111174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.568 [2024-11-26 18:06:15.111222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:58.568 [2024-11-26 18:06:15.111245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.991 ms 00:19:58.568 
[2024-11-26 18:06:15.111258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 18:06:15.113737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.568 [2024-11-26 18:06:15.113771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:58.568 [2024-11-26 18:06:15.113787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.424 ms 00:19:58.568 [2024-11-26 18:06:15.113798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 18:06:15.118015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.568 [2024-11-26 18:06:15.118055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:58.568 [2024-11-26 18:06:15.118072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.193 ms 00:19:58.568 [2024-11-26 18:06:15.118099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 18:06:15.118151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.568 [2024-11-26 18:06:15.118165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:58.568 [2024-11-26 18:06:15.118194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:58.568 [2024-11-26 18:06:15.118206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 18:06:15.118290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.568 [2024-11-26 18:06:15.118304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:58.568 [2024-11-26 18:06:15.118322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:58.568 [2024-11-26 18:06:15.118333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 18:06:15.119536] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3151.083 ms, result 0 00:19:58.568 { 00:19:58.568 "name": "ftl0", 00:19:58.568 "uuid": "dab23966-cac7-4916-9c8e-e833bd8ea971" 00:19:58.568 } 00:19:58.568 18:06:15 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:19:58.568 18:06:15 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:58.568 18:06:15 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:19:58.568 18:06:15 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:19:58.568 18:06:15 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:19:58.828 /dev/nbd0 00:19:58.828 18:06:15 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:19:58.828 18:06:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:58.828 18:06:15 -- common/autotest_common.sh@867 -- # local i 00:19:58.828 18:06:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:58.828 18:06:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:58.828 18:06:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:58.828 18:06:15 -- common/autotest_common.sh@871 -- # break 00:19:58.828 18:06:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:58.828 18:06:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:58.828 18:06:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:19:58.828 1+0 records in 00:19:58.828 
1+0 records out 00:19:58.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246043 s, 16.6 MB/s 00:19:58.828 18:06:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:19:58.828 18:06:15 -- common/autotest_common.sh@884 -- # size=4096 00:19:58.828 18:06:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:19:58.828 18:06:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:58.828 18:06:15 -- common/autotest_common.sh@887 -- # return 0 00:19:58.828 18:06:15 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:19:58.828 [2024-11-26 18:06:15.679253] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:58.828 [2024-11-26 18:06:15.679629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86077 ] 00:19:59.086 [2024-11-26 18:06:15.831827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.086 [2024-11-26 18:06:15.879099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.023  [2024-11-26T18:06:18.369Z] Copying: 194/1024 [MB] (194 MBps) [2024-11-26T18:06:19.310Z] Copying: 390/1024 [MB] (196 MBps) [2024-11-26T18:06:20.247Z] Copying: 586/1024 [MB] (196 MBps) [2024-11-26T18:06:21.183Z] Copying: 780/1024 [MB] (193 MBps) [2024-11-26T18:06:21.442Z] Copying: 973/1024 [MB] (192 MBps) [2024-11-26T18:06:21.442Z] Copying: 1024/1024 [MB] (average 194 MBps) 00:20:04.516 00:20:04.776 18:06:21 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:06.678 18:06:23 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:20:06.678 [2024-11-26 18:06:23.284329] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:20:06.678 [2024-11-26 18:06:23.284482] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86158 ] 00:20:06.678 [2024-11-26 18:06:23.435792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.678 [2024-11-26 18:06:23.484361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.054  [2024-11-26T18:06:25.692Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-26T18:06:26.639Z] Copying: 34/1024 [MB] (17 MBps) [2024-11-26T18:06:27.574Z] Copying: 52/1024 [MB] (17 MBps) [2024-11-26T18:06:28.953Z] Copying: 69/1024 [MB] (17 MBps) [2024-11-26T18:06:29.887Z] Copying: 87/1024 [MB] (17 MBps) [2024-11-26T18:06:30.824Z] Copying: 104/1024 [MB] (17 MBps) [2024-11-26T18:06:31.763Z] Copying: 122/1024 [MB] (17 MBps) [2024-11-26T18:06:32.702Z] Copying: 140/1024 [MB] (18 MBps) [2024-11-26T18:06:33.649Z] Copying: 158/1024 [MB] (17 MBps) [2024-11-26T18:06:34.581Z] Copying: 176/1024 [MB] (18 MBps) [2024-11-26T18:06:35.967Z] Copying: 193/1024 [MB] (17 MBps) [2024-11-26T18:06:36.534Z] Copying: 211/1024 [MB] (17 MBps) [2024-11-26T18:06:37.911Z] Copying: 228/1024 [MB] (17 MBps) [2024-11-26T18:06:38.847Z] Copying: 247/1024 [MB] (18 MBps) [2024-11-26T18:06:39.779Z] Copying: 265/1024 [MB] (17 MBps) [2024-11-26T18:06:40.719Z] Copying: 282/1024 [MB] (17 MBps) [2024-11-26T18:06:41.701Z] Copying: 299/1024 [MB] (17 MBps) [2024-11-26T18:06:42.640Z] Copying: 316/1024 [MB] (17 MBps) [2024-11-26T18:06:43.576Z] Copying: 334/1024 [MB] (17 MBps) [2024-11-26T18:06:44.954Z] Copying: 351/1024 [MB] (17 MBps) [2024-11-26T18:06:45.520Z] Copying: 369/1024 [MB] (18 MBps) [2024-11-26T18:06:46.651Z] Copying: 387/1024 [MB] (18 MBps) [2024-11-26T18:06:47.584Z] Copying: 405/1024 [MB] (17 MBps) [2024-11-26T18:06:48.521Z] Copying: 423/1024 [MB] (18 MBps) [2024-11-26T18:06:49.895Z] Copying: 441/1024 [MB] (17 MBps) [2024-11-26T18:06:50.829Z] Copying: 459/1024 [MB] (17 MBps) [2024-11-26T18:06:51.768Z] Copying: 477/1024 [MB] (17 MBps) [2024-11-26T18:06:52.704Z] Copying: 495/1024 [MB] (17 MBps) [2024-11-26T18:06:53.718Z] Copying: 512/1024 [MB] (17 MBps) [2024-11-26T18:06:54.654Z] Copying: 530/1024 [MB] (17 MBps) [2024-11-26T18:06:55.593Z] Copying: 548/1024 [MB] (17 MBps) [2024-11-26T18:06:56.529Z] Copying: 565/1024 [MB] (17 MBps) [2024-11-26T18:06:57.905Z] Copying: 583/1024 [MB] (17 MBps) [2024-11-26T18:06:58.842Z] Copying: 600/1024 [MB] (17 MBps) [2024-11-26T18:06:59.781Z] Copying: 617/1024 [MB] (17 MBps) [2024-11-26T18:07:00.717Z] Copying: 634/1024 [MB] (16 MBps) [2024-11-26T18:07:01.653Z] Copying: 652/1024 [MB] (17 MBps) [2024-11-26T18:07:02.588Z] Copying: 670/1024 [MB] (17 MBps) [2024-11-26T18:07:03.525Z] Copying: 687/1024 [MB] (17 MBps) [2024-11-26T18:07:04.898Z] Copying: 704/1024 [MB] (17 MBps) [2024-11-26T18:07:05.832Z] Copying: 723/1024 [MB] (18 MBps) [2024-11-26T18:07:06.834Z] Copying: 741/1024 [MB] (18 MBps) [2024-11-26T18:07:07.780Z] Copying: 760/1024 [MB] (19 MBps) [2024-11-26T18:07:08.715Z] Copying: 779/1024 [MB] (18 MBps) [2024-11-26T18:07:09.652Z] Copying: 796/1024 [MB] (17 MBps) [2024-11-26T18:07:10.589Z] Copying: 815/1024 [MB] (18 MBps) [2024-11-26T18:07:11.525Z] Copying: 834/1024 [MB] (18 MBps) [2024-11-26T18:07:12.904Z] Copying: 852/1024 [MB] (18 MBps) [2024-11-26T18:07:13.470Z] Copying: 870/1024 [MB] (17 MBps) [2024-11-26T18:07:14.849Z] Copying: 888/1024 [MB] (18 MBps) [2024-11-26T18:07:15.785Z] Copying: 907/1024 
[MB] (19 MBps) [2024-11-26T18:07:16.782Z] Copying: 926/1024 [MB] (18 MBps) [2024-11-26T18:07:17.765Z] Copying: 945/1024 [MB] (18 MBps) [2024-11-26T18:07:18.700Z] Copying: 964/1024 [MB] (19 MBps) [2024-11-26T18:07:19.636Z] Copying: 983/1024 [MB] (19 MBps) [2024-11-26T18:07:20.572Z] Copying: 1002/1024 [MB] (19 MBps) [2024-11-26T18:07:20.572Z] Copying: 1021/1024 [MB] (19 MBps) [2024-11-26T18:07:20.868Z] Copying: 1024/1024 [MB] (average 17 MBps) 00:21:03.942 00:21:03.942 18:07:20 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:21:03.942 18:07:20 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:21:04.200 18:07:21 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:04.462 [2024-11-26 18:07:21.207348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.462 [2024-11-26 18:07:21.207416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:04.462 [2024-11-26 18:07:21.207433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:04.462 [2024-11-26 18:07:21.207447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.462 [2024-11-26 18:07:21.207492] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:04.462 [2024-11-26 18:07:21.208182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.462 [2024-11-26 18:07:21.208201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:04.462 [2024-11-26 18:07:21.208216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:21:04.462 [2024-11-26 18:07:21.208226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.462 [2024-11-26 18:07:21.210181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.462 [2024-11-26 18:07:21.210233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:04.462 [2024-11-26 18:07:21.210253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.923 ms 00:21:04.462 [2024-11-26 18:07:21.210264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.462 [2024-11-26 18:07:21.230390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.462 [2024-11-26 18:07:21.230647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:04.462 [2024-11-26 18:07:21.230684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.119 ms 00:21:04.462 [2024-11-26 18:07:21.230697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.462 [2024-11-26 18:07:21.235771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.462 [2024-11-26 18:07:21.235817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:04.462 [2024-11-26 18:07:21.235833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.026 ms 00:21:04.462 [2024-11-26 18:07:21.235844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.462 [2024-11-26 18:07:21.237700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.462 [2024-11-26 18:07:21.237848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:04.462 [2024-11-26 18:07:21.237874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.769 ms 00:21:04.462 [2024-11-26 
18:07:21.237884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.462 [2024-11-26 18:07:21.242829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.462 [2024-11-26 18:07:21.242874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:04.462 [2024-11-26 18:07:21.242891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.906 ms 00:21:04.462 [2024-11-26 18:07:21.242906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.462 [2024-11-26 18:07:21.243027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.462 [2024-11-26 18:07:21.243040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:04.462 [2024-11-26 18:07:21.243054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:21:04.462 [2024-11-26 18:07:21.243065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.462 [2024-11-26 18:07:21.245051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.462 [2024-11-26 18:07:21.245199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:04.462 [2024-11-26 18:07:21.245225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.853 ms 00:21:04.462 [2024-11-26 18:07:21.245236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.462 [2024-11-26 18:07:21.246721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.462 [2024-11-26 18:07:21.246752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:04.462 [2024-11-26 18:07:21.246767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.443 ms 00:21:04.462 [2024-11-26 18:07:21.246776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.462 [2024-11-26 18:07:21.247974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.462 [2024-11-26 18:07:21.248009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:04.462 [2024-11-26 18:07:21.248024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.114 ms 00:21:04.462 [2024-11-26 18:07:21.248033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.462 [2024-11-26 18:07:21.249334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.462 [2024-11-26 18:07:21.249480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:04.462 [2024-11-26 18:07:21.249627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.222 ms 00:21:04.462 [2024-11-26 18:07:21.249667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.462 [2024-11-26 18:07:21.249727] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:04.462 [2024-11-26 18:07:21.249913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.249936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.249948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.249961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.249973] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.249986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.249996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 
18:07:21.250285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:04.462 [2024-11-26 18:07:21.250336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:21:04.463 [2024-11-26 18:07:21.250596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.250999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.251011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.251023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.251038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.251048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.251061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.251071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.251084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.251095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.251108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.251122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.251135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:04.463 [2024-11-26 18:07:21.251154] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:04.463 [2024-11-26 18:07:21.251166] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dab23966-cac7-4916-9c8e-e833bd8ea971 00:21:04.463 [2024-11-26 18:07:21.251177] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:04.463 [2024-11-26 18:07:21.251193] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:04.463 [2024-11-26 
18:07:21.251203] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:04.463 [2024-11-26 18:07:21.251216] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:04.463 [2024-11-26 18:07:21.251226] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:04.463 [2024-11-26 18:07:21.251241] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:04.463 [2024-11-26 18:07:21.251251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:04.463 [2024-11-26 18:07:21.251264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:04.463 [2024-11-26 18:07:21.251273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:04.463 [2024-11-26 18:07:21.251286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.463 [2024-11-26 18:07:21.251296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:04.463 [2024-11-26 18:07:21.251309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.564 ms 00:21:04.463 [2024-11-26 18:07:21.251319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.463 [2024-11-26 18:07:21.253097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.463 [2024-11-26 18:07:21.253118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:04.463 [2024-11-26 18:07:21.253132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.754 ms 00:21:04.463 [2024-11-26 18:07:21.253142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.463 [2024-11-26 18:07:21.253213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.463 [2024-11-26 18:07:21.253224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:04.463 [2024-11-26 18:07:21.253237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:04.463 [2024-11-26 18:07:21.253246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.463 [2024-11-26 18:07:21.260331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.463 [2024-11-26 18:07:21.260371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:04.463 [2024-11-26 18:07:21.260387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.463 [2024-11-26 18:07:21.260398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.463 [2024-11-26 18:07:21.260462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.463 [2024-11-26 18:07:21.260474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:04.464 [2024-11-26 18:07:21.260487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.464 [2024-11-26 18:07:21.260497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.464 [2024-11-26 18:07:21.260587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.464 [2024-11-26 18:07:21.260601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:04.464 [2024-11-26 18:07:21.260616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.464 [2024-11-26 18:07:21.260626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.464 [2024-11-26 18:07:21.260648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:21:04.464 [2024-11-26 18:07:21.260659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:04.464 [2024-11-26 18:07:21.260672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.464 [2024-11-26 18:07:21.260682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.464 [2024-11-26 18:07:21.274889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.464 [2024-11-26 18:07:21.274945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:04.464 [2024-11-26 18:07:21.274963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.464 [2024-11-26 18:07:21.274982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.464 [2024-11-26 18:07:21.279655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.464 [2024-11-26 18:07:21.279690] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:04.464 [2024-11-26 18:07:21.279705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.464 [2024-11-26 18:07:21.279716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.464 [2024-11-26 18:07:21.279788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.464 [2024-11-26 18:07:21.279804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:04.464 [2024-11-26 18:07:21.279826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.464 [2024-11-26 18:07:21.279836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.464 [2024-11-26 18:07:21.279877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.464 [2024-11-26 18:07:21.279888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:04.464 [2024-11-26 18:07:21.279901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.464 [2024-11-26 18:07:21.279917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.464 [2024-11-26 18:07:21.280001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.464 [2024-11-26 18:07:21.280014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:04.464 [2024-11-26 18:07:21.280031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.464 [2024-11-26 18:07:21.280041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.464 [2024-11-26 18:07:21.280086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.464 [2024-11-26 18:07:21.280098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:04.464 [2024-11-26 18:07:21.280112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.464 [2024-11-26 18:07:21.280122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.464 [2024-11-26 18:07:21.280167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.464 [2024-11-26 18:07:21.280178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:04.464 [2024-11-26 18:07:21.280194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.464 [2024-11-26 18:07:21.280204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.464 
[2024-11-26 18:07:21.280254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.464 [2024-11-26 18:07:21.280265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:04.464 [2024-11-26 18:07:21.280278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.464 [2024-11-26 18:07:21.280288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.464 [2024-11-26 18:07:21.280431] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 73.160 ms, result 0 00:21:04.464 true 00:21:04.464 18:07:21 -- ftl/dirty_shutdown.sh@83 -- # kill -9 85940 00:21:04.464 18:07:21 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid85940 00:21:04.464 18:07:21 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:21:04.723 [2024-11-26 18:07:21.395898] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:04.723 [2024-11-26 18:07:21.396420] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86755 ] 00:21:04.723 [2024-11-26 18:07:21.548750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.723 [2024-11-26 18:07:21.593297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.100  [2024-11-26T18:07:23.962Z] Copying: 198/1024 [MB] (198 MBps) [2024-11-26T18:07:24.908Z] Copying: 396/1024 [MB] (198 MBps) [2024-11-26T18:07:25.871Z] Copying: 594/1024 [MB] (197 MBps) [2024-11-26T18:07:26.809Z] Copying: 791/1024 [MB] (197 MBps) [2024-11-26T18:07:27.069Z] Copying: 987/1024 [MB] (195 MBps) [2024-11-26T18:07:27.069Z] Copying: 1024/1024 [MB] (average 197 MBps) 00:21:10.143 00:21:10.403 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 85940 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:21:10.403 18:07:27 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:10.403 [2024-11-26 18:07:27.156674] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:21:10.403 [2024-11-26 18:07:27.157033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86823 ] 00:21:10.403 [2024-11-26 18:07:27.308647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.662 [2024-11-26 18:07:27.355215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.662 [2024-11-26 18:07:27.457095] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:10.662 [2024-11-26 18:07:27.457189] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:10.662 [2024-11-26 18:07:27.518681] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:21:10.662 [2024-11-26 18:07:27.518939] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:21:10.662 [2024-11-26 18:07:27.519155] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:21:10.922 [2024-11-26 18:07:27.808519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.922 [2024-11-26 18:07:27.808588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:10.922 [2024-11-26 18:07:27.808604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:10.922 [2024-11-26 18:07:27.808615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.922 [2024-11-26 18:07:27.808689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.922 [2024-11-26 18:07:27.808703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:10.922 [2024-11-26 18:07:27.808714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:10.922 [2024-11-26 18:07:27.808724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.922 [2024-11-26 18:07:27.808745] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:10.922 [2024-11-26 18:07:27.808990] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:10.922 [2024-11-26 18:07:27.809010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.922 [2024-11-26 18:07:27.809020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:10.922 [2024-11-26 18:07:27.809031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:21:10.922 [2024-11-26 18:07:27.809050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.922 [2024-11-26 18:07:27.810489] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:10.922 [2024-11-26 18:07:27.813042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.922 [2024-11-26 18:07:27.813078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:10.922 [2024-11-26 18:07:27.813092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.559 ms 00:21:10.922 [2024-11-26 18:07:27.813102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.922 [2024-11-26 18:07:27.813168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.923 [2024-11-26 18:07:27.813179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:10.923 
[2024-11-26 18:07:27.813191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:10.923 [2024-11-26 18:07:27.813201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.923 [2024-11-26 18:07:27.819830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.923 [2024-11-26 18:07:27.819862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:10.923 [2024-11-26 18:07:27.819874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.579 ms 00:21:10.923 [2024-11-26 18:07:27.819884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.923 [2024-11-26 18:07:27.819977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.923 [2024-11-26 18:07:27.819989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:10.923 [2024-11-26 18:07:27.820000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:21:10.923 [2024-11-26 18:07:27.820020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.923 [2024-11-26 18:07:27.820076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.923 [2024-11-26 18:07:27.820095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:10.923 [2024-11-26 18:07:27.820106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:10.923 [2024-11-26 18:07:27.820115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.923 [2024-11-26 18:07:27.820154] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:10.923 [2024-11-26 18:07:27.821816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.923 [2024-11-26 18:07:27.821845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:10.923 [2024-11-26 18:07:27.821857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.677 ms 00:21:10.923 [2024-11-26 18:07:27.821871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.923 [2024-11-26 18:07:27.821912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.923 [2024-11-26 18:07:27.821930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:10.923 [2024-11-26 18:07:27.821940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:10.923 [2024-11-26 18:07:27.821957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.923 [2024-11-26 18:07:27.821979] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:10.923 [2024-11-26 18:07:27.822002] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:21:10.923 [2024-11-26 18:07:27.822036] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:10.923 [2024-11-26 18:07:27.822056] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:21:10.923 [2024-11-26 18:07:27.822123] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:10.923 [2024-11-26 18:07:27.822139] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:10.923 [2024-11-26 18:07:27.822152] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:10.923 [2024-11-26 18:07:27.822165] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:10.923 [2024-11-26 18:07:27.822177] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:10.923 [2024-11-26 18:07:27.822195] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:10.923 [2024-11-26 18:07:27.822205] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:10.923 [2024-11-26 18:07:27.822216] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:10.923 [2024-11-26 18:07:27.822233] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:10.923 [2024-11-26 18:07:27.822248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.923 [2024-11-26 18:07:27.822258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:10.923 [2024-11-26 18:07:27.822268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:21:10.923 [2024-11-26 18:07:27.822277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.923 [2024-11-26 18:07:27.822331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.923 [2024-11-26 18:07:27.822351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:10.923 [2024-11-26 18:07:27.822361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:10.923 [2024-11-26 18:07:27.822370] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.923 [2024-11-26 18:07:27.822438] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:10.923 [2024-11-26 18:07:27.822470] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:10.923 [2024-11-26 18:07:27.822482] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:10.923 [2024-11-26 18:07:27.822492] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:10.923 [2024-11-26 18:07:27.822509] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:10.923 [2024-11-26 18:07:27.822519] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:10.923 [2024-11-26 18:07:27.822528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:10.923 [2024-11-26 18:07:27.822538] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:10.923 [2024-11-26 18:07:27.822548] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:10.923 [2024-11-26 18:07:27.822557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:10.923 [2024-11-26 18:07:27.822566] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:10.923 [2024-11-26 18:07:27.822579] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:10.923 [2024-11-26 18:07:27.822589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:10.923 [2024-11-26 18:07:27.822598] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:10.923 [2024-11-26 18:07:27.822607] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:10.923 [2024-11-26 18:07:27.822616] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 
MiB 00:21:10.923 [2024-11-26 18:07:27.822631] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:10.923 [2024-11-26 18:07:27.822640] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:10.923 [2024-11-26 18:07:27.822650] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:10.923 [2024-11-26 18:07:27.822659] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:10.923 [2024-11-26 18:07:27.822668] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:10.923 [2024-11-26 18:07:27.822677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:10.923 [2024-11-26 18:07:27.822686] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:10.923 [2024-11-26 18:07:27.822695] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:10.923 [2024-11-26 18:07:27.822704] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:10.923 [2024-11-26 18:07:27.822712] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:10.923 [2024-11-26 18:07:27.822722] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:10.923 [2024-11-26 18:07:27.822731] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:10.923 [2024-11-26 18:07:27.822740] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:10.923 [2024-11-26 18:07:27.822749] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:10.923 [2024-11-26 18:07:27.822758] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:10.923 [2024-11-26 18:07:27.822766] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:10.923 [2024-11-26 18:07:27.822784] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:10.923 [2024-11-26 18:07:27.822794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:10.923 [2024-11-26 18:07:27.822803] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:10.923 [2024-11-26 18:07:27.822812] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:10.924 [2024-11-26 18:07:27.822820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:10.924 [2024-11-26 18:07:27.822829] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:10.924 [2024-11-26 18:07:27.822839] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:10.924 [2024-11-26 18:07:27.822847] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:10.924 [2024-11-26 18:07:27.822856] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:10.924 [2024-11-26 18:07:27.822866] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:10.924 [2024-11-26 18:07:27.822875] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:10.924 [2024-11-26 18:07:27.822887] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:10.924 [2024-11-26 18:07:27.822897] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:10.924 [2024-11-26 18:07:27.822906] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:10.924 [2024-11-26 18:07:27.822915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:10.924 [2024-11-26 18:07:27.822924] ftl_layout.c: 115:dump_region: 
*NOTICE*: [FTL][ftl0] Region data_btm 00:21:10.924 [2024-11-26 18:07:27.822936] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:10.924 [2024-11-26 18:07:27.822946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:10.924 [2024-11-26 18:07:27.822956] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:10.924 [2024-11-26 18:07:27.822968] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:10.924 [2024-11-26 18:07:27.822986] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:10.924 [2024-11-26 18:07:27.822997] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:10.924 [2024-11-26 18:07:27.823007] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:10.924 [2024-11-26 18:07:27.823017] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:10.924 [2024-11-26 18:07:27.823027] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:10.924 [2024-11-26 18:07:27.823037] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:10.924 [2024-11-26 18:07:27.823047] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:10.924 [2024-11-26 18:07:27.823057] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:10.924 [2024-11-26 18:07:27.823067] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:10.924 [2024-11-26 18:07:27.823077] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:10.924 [2024-11-26 18:07:27.823087] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:10.924 [2024-11-26 18:07:27.823097] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:10.924 [2024-11-26 18:07:27.823110] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:10.924 [2024-11-26 18:07:27.823121] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:10.924 [2024-11-26 18:07:27.823142] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:10.924 [2024-11-26 18:07:27.823159] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:10.924 [2024-11-26 18:07:27.823170] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:10.924 
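The SB metadata layout above lists each region as a hex block offset and block size, while the layout dump reports the same regions in MiB; the two agree under the 4 KiB FTL block size the dump itself implies (the l2p region's blk_sz:0x5000 is exactly the reported 80.00 MiB). A minimal sanity-check sketch, assuming that 4 KiB block size:

#!/usr/bin/env bash
# Cross-check hex blk_sz values from the SB metadata layout against the
# MiB figures in the layout dump. blk=4096 is an assumption consistent
# with the dump above.
blk=4096

to_mib() {
    echo $(( $1 * blk / 1048576 ))   # block count -> MiB
}

to_mib 0x5000     # l2p (type 0x2)      -> 80, matches "blocks: 80.00 MiB"
to_mib 0x100000   # data_nvc (type 0x8) -> 4096, matches "blocks: 4096.00 MiB"
to_mib 0x1900000  # data_btm (type 0x9) -> 102400, matches "blocks: 102400.00 MiB"

# Independent check: "L2P entries: 20971520" x "L2P address size: 4" bytes
echo $(( 20971520 * 4 / 1048576 ))   # -> 80 MiB, the same l2p region size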
[2024-11-26 18:07:27.823180] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:10.924 [2024-11-26 18:07:27.823200] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:10.924 [2024-11-26 18:07:27.823211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.924 [2024-11-26 18:07:27.823221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:10.924 [2024-11-26 18:07:27.823232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:21:10.924 [2024-11-26 18:07:27.823251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.924 [2024-11-26 18:07:27.831638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.924 [2024-11-26 18:07:27.831840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:10.924 [2024-11-26 18:07:27.831873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.356 ms 00:21:10.924 [2024-11-26 18:07:27.831885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.924 [2024-11-26 18:07:27.831979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.924 [2024-11-26 18:07:27.831990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:10.924 [2024-11-26 18:07:27.832009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:10.924 [2024-11-26 18:07:27.832019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.184 [2024-11-26 18:07:27.853412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.184 [2024-11-26 18:07:27.853748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:11.184 [2024-11-26 18:07:27.853783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.361 ms 00:21:11.184 [2024-11-26 18:07:27.853806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.184 [2024-11-26 18:07:27.853885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.184 [2024-11-26 18:07:27.853914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:11.184 [2024-11-26 18:07:27.853929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:11.184 [2024-11-26 18:07:27.853943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.184 [2024-11-26 18:07:27.854518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.184 [2024-11-26 18:07:27.854539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:11.184 [2024-11-26 18:07:27.854554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms 00:21:11.184 [2024-11-26 18:07:27.854568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.184 [2024-11-26 18:07:27.854726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.184 [2024-11-26 18:07:27.854749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:11.184 [2024-11-26 18:07:27.854764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:21:11.184 [2024-11-26 18:07:27.854778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.184 [2024-11-26 18:07:27.863036] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.184 [2024-11-26 18:07:27.863078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:11.184 [2024-11-26 18:07:27.863093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.242 ms 00:21:11.184 [2024-11-26 18:07:27.863103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.184 [2024-11-26 18:07:27.865704] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:11.184 [2024-11-26 18:07:27.865737] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:11.184 [2024-11-26 18:07:27.865751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.184 [2024-11-26 18:07:27.865762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:11.184 [2024-11-26 18:07:27.865773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.530 ms 00:21:11.184 [2024-11-26 18:07:27.865782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.184 [2024-11-26 18:07:27.878595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.184 [2024-11-26 18:07:27.878641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:11.184 [2024-11-26 18:07:27.878655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.790 ms 00:21:11.184 [2024-11-26 18:07:27.878677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.184 [2024-11-26 18:07:27.880949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.184 [2024-11-26 18:07:27.881095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:11.184 [2024-11-26 18:07:27.881117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.219 ms 00:21:11.184 [2024-11-26 18:07:27.881127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.184 [2024-11-26 18:07:27.882718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.184 [2024-11-26 18:07:27.882753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:11.184 [2024-11-26 18:07:27.882766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.555 ms 00:21:11.184 [2024-11-26 18:07:27.882776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.184 [2024-11-26 18:07:27.883000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.184 [2024-11-26 18:07:27.883016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:11.184 [2024-11-26 18:07:27.883028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:21:11.184 [2024-11-26 18:07:27.883040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.184 [2024-11-26 18:07:27.908051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.184 [2024-11-26 18:07:27.908304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:11.184 [2024-11-26 18:07:27.908330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.030 ms 00:21:11.184 [2024-11-26 18:07:27.908343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.184 [2024-11-26 18:07:27.914764] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum 
resident size is: 9 (of 10) MiB 00:21:11.184 [2024-11-26 18:07:27.917973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.184 [2024-11-26 18:07:27.918089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:11.184 [2024-11-26 18:07:27.918111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.571 ms 00:21:11.185 [2024-11-26 18:07:27.918133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.185 [2024-11-26 18:07:27.918235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.185 [2024-11-26 18:07:27.918259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:11.185 [2024-11-26 18:07:27.918274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:11.185 [2024-11-26 18:07:27.918284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.185 [2024-11-26 18:07:27.918343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.185 [2024-11-26 18:07:27.918355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:11.185 [2024-11-26 18:07:27.918366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:11.185 [2024-11-26 18:07:27.918376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.185 [2024-11-26 18:07:27.920634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.185 [2024-11-26 18:07:27.920655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:11.185 [2024-11-26 18:07:27.920673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.244 ms 00:21:11.185 [2024-11-26 18:07:27.920688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.185 [2024-11-26 18:07:27.920716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.185 [2024-11-26 18:07:27.920735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:11.185 [2024-11-26 18:07:27.920746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:11.185 [2024-11-26 18:07:27.920755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.185 [2024-11-26 18:07:27.920812] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:11.185 [2024-11-26 18:07:27.920825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.185 [2024-11-26 18:07:27.920835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:11.185 [2024-11-26 18:07:27.920845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:11.185 [2024-11-26 18:07:27.920854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.185 [2024-11-26 18:07:27.924771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.185 [2024-11-26 18:07:27.924902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:11.185 [2024-11-26 18:07:27.924977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.897 ms 00:21:11.185 [2024-11-26 18:07:27.925014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.185 [2024-11-26 18:07:27.925113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.185 [2024-11-26 18:07:27.925152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:11.185 
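Every management step in this startup is traced as a four-message group (Action, name, duration, status from trace_step), and finish_msg then reports the whole process total ('FTL startup', 117.625 ms just below). A minimal sketch for pulling the slowest steps out of a capture of this output; ftl.log is a hypothetical file name, and the field splitting assumes one NOTICE message per captured line:

awk -F'name: |duration: | ms' '
    /407:trace_step/ { name = $2 }            # remember the step name
    /409:trace_step/ { print $2 "\t" name }   # emit its duration in ms
' ftl.log | sort -rn | head   # slowest management steps first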
[2024-11-26 18:07:27.925234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:11.185 [2024-11-26 18:07:27.925274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.185 [2024-11-26 18:07:27.926364] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 117.625 ms, result 0 00:21:12.121  [2024-11-26T18:08:08.308Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-26 18:08:08.027866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.382 [2024-11-26 18:08:08.027928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:51.382 [2024-11-26 18:08:08.027957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:51.382 [2024-11-26 18:08:08.027967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.382 [2024-11-26 18:08:08.029201] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:51.382 [2024-11-26 18:08:08.030116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:51.382 [2024-11-26 18:08:08.030144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:51.382 [2024-11-26 18:08:08.030164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:21:51.382 [2024-11-26 18:08:08.030187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.382 [2024-11-26 18:08:08.040576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.382 [2024-11-26 18:08:08.040614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:51.382 [2024-11-26 18:08:08.040638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.448 ms 00:21:51.382 [2024-11-26 18:08:08.040648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.382 [2024-11-26 18:08:08.063345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.382 [2024-11-26 18:08:08.063384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:51.382 [2024-11-26 18:08:08.063399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.714 ms 00:21:51.382 [2024-11-26 18:08:08.063416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.382 [2024-11-26 18:08:08.068619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.382 [2024-11-26 18:08:08.068651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:51.382 [2024-11-26 18:08:08.068674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.158 ms 00:21:51.383 [2024-11-26 18:08:08.068685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.383 [2024-11-26 18:08:08.070470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.383 [2024-11-26 18:08:08.070501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:51.383 [2024-11-26 18:08:08.070512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.710 ms 00:21:51.383 [2024-11-26 18:08:08.070521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.383 [2024-11-26 18:08:08.074378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.383 [2024-11-26 18:08:08.074410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:51.383 [2024-11-26 18:08:08.074433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.836 ms 00:21:51.383 [2024-11-26 18:08:08.074471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.383 [2024-11-26 18:08:08.161306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.383 [2024-11-26 18:08:08.161379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:51.383 [2024-11-26 18:08:08.161409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.937 ms 00:21:51.383 [2024-11-26 18:08:08.161421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.383 [2024-11-26 18:08:08.163699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.383 [2024-11-26 18:08:08.163738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:51.383 [2024-11-26 18:08:08.163751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.263 ms 00:21:51.383 [2024-11-26 18:08:08.163760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.383 [2024-11-26 
18:08:08.165278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.383 [2024-11-26 18:08:08.165311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:51.383 [2024-11-26 18:08:08.165322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.490 ms 00:21:51.383 [2024-11-26 18:08:08.165331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.383 [2024-11-26 18:08:08.166405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.383 [2024-11-26 18:08:08.166438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:51.383 [2024-11-26 18:08:08.166449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.047 ms 00:21:51.383 [2024-11-26 18:08:08.166475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.383 [2024-11-26 18:08:08.167600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.383 [2024-11-26 18:08:08.167627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:51.383 [2024-11-26 18:08:08.167638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.068 ms 00:21:51.383 [2024-11-26 18:08:08.167648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.383 [2024-11-26 18:08:08.167674] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:51.383 [2024-11-26 18:08:08.167691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 110592 / 261120 wr_cnt: 1 state: open 00:21:51.383 [2024-11-26 18:08:08.167704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 
261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.167996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168377] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:51.383 [2024-11-26 18:08:08.168388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 
18:08:08.168648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:51.384 [2024-11-26 18:08:08.168775] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:51.384 [2024-11-26 18:08:08.168785] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dab23966-cac7-4916-9c8e-e833bd8ea971 00:21:51.384 [2024-11-26 18:08:08.168800] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 110592 00:21:51.384 [2024-11-26 18:08:08.168810] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 111552 00:21:51.384 [2024-11-26 18:08:08.168819] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 110592 00:21:51.384 [2024-11-26 18:08:08.168830] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0087 00:21:51.384 [2024-11-26 18:08:08.168839] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:51.384 [2024-11-26 18:08:08.168849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:51.384 [2024-11-26 18:08:08.168859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:51.384 [2024-11-26 18:08:08.168868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:51.384 [2024-11-26 18:08:08.168876] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:51.384 [2024-11-26 18:08:08.168886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.384 [2024-11-26 18:08:08.168896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:51.384 [2024-11-26 18:08:08.168907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.214 ms 00:21:51.384 [2024-11-26 18:08:08.168924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.384 [2024-11-26 18:08:08.170626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.384 [2024-11-26 18:08:08.170648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:51.384 [2024-11-26 18:08:08.170659] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.686 ms 00:21:51.384 [2024-11-26 18:08:08.170678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.384 [2024-11-26 18:08:08.170789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.384 [2024-11-26 18:08:08.170802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:51.384 [2024-11-26 18:08:08.170813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:21:51.384 [2024-11-26 18:08:08.170826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.384 [2024-11-26 18:08:08.177705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.384 [2024-11-26 18:08:08.177732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:51.384 [2024-11-26 18:08:08.177745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.384 [2024-11-26 18:08:08.177755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.384 [2024-11-26 18:08:08.177806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.384 [2024-11-26 18:08:08.177817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:51.384 [2024-11-26 18:08:08.177828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.384 [2024-11-26 18:08:08.177843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.384 [2024-11-26 18:08:08.177902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.384 [2024-11-26 18:08:08.177916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:51.384 [2024-11-26 18:08:08.177926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.384 [2024-11-26 18:08:08.177936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.384 [2024-11-26 18:08:08.177953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.384 [2024-11-26 18:08:08.177963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:51.384 [2024-11-26 18:08:08.177973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.384 [2024-11-26 18:08:08.177982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.384 [2024-11-26 18:08:08.191643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.384 [2024-11-26 18:08:08.191695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:51.384 [2024-11-26 18:08:08.191709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.384 [2024-11-26 18:08:08.191720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.384 [2024-11-26 18:08:08.196641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.384 [2024-11-26 18:08:08.196677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:51.384 [2024-11-26 18:08:08.196690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.384 [2024-11-26 18:08:08.196707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.384 [2024-11-26 18:08:08.196775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.384 [2024-11-26 18:08:08.196788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize core IO channel 00:21:51.384 [2024-11-26 18:08:08.196799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.384 [2024-11-26 18:08:08.196809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.384 [2024-11-26 18:08:08.196839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.384 [2024-11-26 18:08:08.196851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:51.384 [2024-11-26 18:08:08.196860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.384 [2024-11-26 18:08:08.196870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.384 [2024-11-26 18:08:08.196960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.384 [2024-11-26 18:08:08.196979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:51.384 [2024-11-26 18:08:08.196996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.384 [2024-11-26 18:08:08.197006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.384 [2024-11-26 18:08:08.197042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.384 [2024-11-26 18:08:08.197055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:51.384 [2024-11-26 18:08:08.197073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.384 [2024-11-26 18:08:08.197089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.384 [2024-11-26 18:08:08.197132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.384 [2024-11-26 18:08:08.197147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:51.384 [2024-11-26 18:08:08.197157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.384 [2024-11-26 18:08:08.197167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.384 [2024-11-26 18:08:08.197215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.384 [2024-11-26 18:08:08.197226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:51.385 [2024-11-26 18:08:08.197243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.385 [2024-11-26 18:08:08.197253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.385 [2024-11-26 18:08:08.197369] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 171.387 ms, result 0 00:21:52.320 00:21:52.320 00:21:52.320 18:08:09 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:21:54.282 18:08:10 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:54.282 [2024-11-26 18:08:10.990879] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:21:54.282 [2024-11-26 18:08:10.991011] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87270 ] 00:21:54.282 [2024-11-26 18:08:11.141552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.282 [2024-11-26 18:08:11.187420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.543 [2024-11-26 18:08:11.289201] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:54.543 [2024-11-26 18:08:11.289292] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:54.543 [2024-11-26 18:08:11.441524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.543 [2024-11-26 18:08:11.441573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:54.543 [2024-11-26 18:08:11.441591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:54.543 [2024-11-26 18:08:11.441602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.543 [2024-11-26 18:08:11.441659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.543 [2024-11-26 18:08:11.441672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:54.543 [2024-11-26 18:08:11.441683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:54.543 [2024-11-26 18:08:11.441692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.543 [2024-11-26 18:08:11.441722] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:54.543 [2024-11-26 18:08:11.442051] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:54.543 [2024-11-26 18:08:11.442093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.543 [2024-11-26 18:08:11.442107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:54.544 [2024-11-26 18:08:11.442124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:21:54.544 [2024-11-26 18:08:11.442140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.544 [2024-11-26 18:08:11.443579] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:54.544 [2024-11-26 18:08:11.446107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.544 [2024-11-26 18:08:11.446140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:54.544 [2024-11-26 18:08:11.446152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.533 ms 00:21:54.544 [2024-11-26 18:08:11.446167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.544 [2024-11-26 18:08:11.446235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.544 [2024-11-26 18:08:11.446247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:54.544 [2024-11-26 18:08:11.446258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:54.544 [2024-11-26 18:08:11.446267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.544 [2024-11-26 18:08:11.452928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.544 [2024-11-26 
18:08:11.452956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:54.544 [2024-11-26 18:08:11.452968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.618 ms 00:21:54.544 [2024-11-26 18:08:11.452986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.544 [2024-11-26 18:08:11.453065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.544 [2024-11-26 18:08:11.453082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:54.544 [2024-11-26 18:08:11.453093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:54.544 [2024-11-26 18:08:11.453102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.544 [2024-11-26 18:08:11.453157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.544 [2024-11-26 18:08:11.453168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:54.544 [2024-11-26 18:08:11.453179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:54.544 [2024-11-26 18:08:11.453192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.544 [2024-11-26 18:08:11.453222] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:54.544 [2024-11-26 18:08:11.454891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.544 [2024-11-26 18:08:11.454918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:54.544 [2024-11-26 18:08:11.454929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.683 ms 00:21:54.544 [2024-11-26 18:08:11.454939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.544 [2024-11-26 18:08:11.454983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.544 [2024-11-26 18:08:11.454994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:54.544 [2024-11-26 18:08:11.455004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:54.544 [2024-11-26 18:08:11.455017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.544 [2024-11-26 18:08:11.455039] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:54.544 [2024-11-26 18:08:11.455063] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:21:54.544 [2024-11-26 18:08:11.455096] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:54.544 [2024-11-26 18:08:11.455113] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:21:54.544 [2024-11-26 18:08:11.455176] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:54.544 [2024-11-26 18:08:11.455188] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:54.544 [2024-11-26 18:08:11.455214] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:54.544 [2024-11-26 18:08:11.455230] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:54.544 [2024-11-26 18:08:11.455242] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:54.544 [2024-11-26 18:08:11.455252] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:54.544 [2024-11-26 18:08:11.455262] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:54.544 [2024-11-26 18:08:11.455271] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:54.544 [2024-11-26 18:08:11.455281] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:54.544 [2024-11-26 18:08:11.455291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.544 [2024-11-26 18:08:11.455300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:54.544 [2024-11-26 18:08:11.455310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:21:54.544 [2024-11-26 18:08:11.455323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.544 [2024-11-26 18:08:11.455384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.544 [2024-11-26 18:08:11.455394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:54.544 [2024-11-26 18:08:11.455404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:54.544 [2024-11-26 18:08:11.455413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.544 [2024-11-26 18:08:11.455497] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:54.544 [2024-11-26 18:08:11.455518] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:54.544 [2024-11-26 18:08:11.455529] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:54.544 [2024-11-26 18:08:11.455539] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.544 [2024-11-26 18:08:11.455550] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:54.544 [2024-11-26 18:08:11.455559] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:54.544 [2024-11-26 18:08:11.455568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:54.544 [2024-11-26 18:08:11.455577] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:54.544 [2024-11-26 18:08:11.455586] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:54.544 [2024-11-26 18:08:11.455595] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:54.544 [2024-11-26 18:08:11.455605] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:54.544 [2024-11-26 18:08:11.455614] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:54.544 [2024-11-26 18:08:11.455622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:54.544 [2024-11-26 18:08:11.455631] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:54.544 [2024-11-26 18:08:11.455640] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:54.544 [2024-11-26 18:08:11.455649] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.544 [2024-11-26 18:08:11.455658] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:54.544 [2024-11-26 18:08:11.455669] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:54.544 [2024-11-26 18:08:11.455678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:21:54.544 [2024-11-26 18:08:11.455687] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:54.544 [2024-11-26 18:08:11.455696] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:54.544 [2024-11-26 18:08:11.455705] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:54.544 [2024-11-26 18:08:11.455714] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:54.544 [2024-11-26 18:08:11.455723] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:54.544 [2024-11-26 18:08:11.455732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:54.544 [2024-11-26 18:08:11.455740] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:54.544 [2024-11-26 18:08:11.455749] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:54.544 [2024-11-26 18:08:11.455758] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:54.544 [2024-11-26 18:08:11.455767] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:54.544 [2024-11-26 18:08:11.455775] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:54.544 [2024-11-26 18:08:11.455784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:54.544 [2024-11-26 18:08:11.455793] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:54.544 [2024-11-26 18:08:11.455801] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:54.544 [2024-11-26 18:08:11.455813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:54.544 [2024-11-26 18:08:11.455822] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:54.544 [2024-11-26 18:08:11.455830] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:54.544 [2024-11-26 18:08:11.455840] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:54.544 [2024-11-26 18:08:11.455849] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:54.544 [2024-11-26 18:08:11.455858] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:54.544 [2024-11-26 18:08:11.455866] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:54.544 [2024-11-26 18:08:11.455875] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:54.544 [2024-11-26 18:08:11.455888] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:54.544 [2024-11-26 18:08:11.455897] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:54.544 [2024-11-26 18:08:11.455907] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.544 [2024-11-26 18:08:11.455916] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:54.544 [2024-11-26 18:08:11.455925] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:54.544 [2024-11-26 18:08:11.455934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:54.544 [2024-11-26 18:08:11.455943] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:54.544 [2024-11-26 18:08:11.455952] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:54.545 [2024-11-26 18:08:11.455964] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:54.545 [2024-11-26 18:08:11.455974] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:54.545 [2024-11-26 18:08:11.455993] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:54.545 [2024-11-26 18:08:11.456005] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:54.545 [2024-11-26 18:08:11.456015] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:54.545 [2024-11-26 18:08:11.456025] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:54.545 [2024-11-26 18:08:11.456036] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:54.545 [2024-11-26 18:08:11.456046] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:54.545 [2024-11-26 18:08:11.456056] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:54.545 [2024-11-26 18:08:11.456066] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:54.545 [2024-11-26 18:08:11.456076] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:54.545 [2024-11-26 18:08:11.456086] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:54.545 [2024-11-26 18:08:11.456095] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:54.545 [2024-11-26 18:08:11.456105] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:54.545 [2024-11-26 18:08:11.456115] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:54.545 [2024-11-26 18:08:11.456125] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:54.545 [2024-11-26 18:08:11.456138] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:54.545 [2024-11-26 18:08:11.456149] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:54.545 [2024-11-26 18:08:11.456160] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:54.545 [2024-11-26 18:08:11.456171] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:54.545 [2024-11-26 18:08:11.456181] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:54.545 [2024-11-26 18:08:11.456191] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:21:54.545 [2024-11-26 18:08:11.456201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.545 [2024-11-26 18:08:11.456212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:54.545 [2024-11-26 18:08:11.456221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.761 ms 00:21:54.545 [2024-11-26 18:08:11.456242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.545 [2024-11-26 18:08:11.464665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.545 [2024-11-26 18:08:11.464692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:54.545 [2024-11-26 18:08:11.464713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.391 ms 00:21:54.545 [2024-11-26 18:08:11.464723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.545 [2024-11-26 18:08:11.464799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.545 [2024-11-26 18:08:11.464810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:54.545 [2024-11-26 18:08:11.464820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:54.545 [2024-11-26 18:08:11.464831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.805 [2024-11-26 18:08:11.486519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.805 [2024-11-26 18:08:11.486576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:54.805 [2024-11-26 18:08:11.486597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.665 ms 00:21:54.805 [2024-11-26 18:08:11.486612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.805 [2024-11-26 18:08:11.486674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.805 [2024-11-26 18:08:11.486691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:54.805 [2024-11-26 18:08:11.486707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:54.805 [2024-11-26 18:08:11.486726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.805 [2024-11-26 18:08:11.487260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.805 [2024-11-26 18:08:11.487280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:54.805 [2024-11-26 18:08:11.487296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:21:54.805 [2024-11-26 18:08:11.487311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.805 [2024-11-26 18:08:11.487482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.805 [2024-11-26 18:08:11.487503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:54.805 [2024-11-26 18:08:11.487519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:21:54.805 [2024-11-26 18:08:11.487533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.805 [2024-11-26 18:08:11.495500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.805 [2024-11-26 18:08:11.495534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:54.805 [2024-11-26 18:08:11.495551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.943 ms 00:21:54.805 [2024-11-26 
18:08:11.495562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.805 [2024-11-26 18:08:11.498145] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:54.805 [2024-11-26 18:08:11.498177] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:54.805 [2024-11-26 18:08:11.498202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.805 [2024-11-26 18:08:11.498214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:54.805 [2024-11-26 18:08:11.498225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.551 ms 00:21:54.805 [2024-11-26 18:08:11.498234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.805 [2024-11-26 18:08:11.511012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.805 [2024-11-26 18:08:11.511048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:54.805 [2024-11-26 18:08:11.511063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.758 ms 00:21:54.805 [2024-11-26 18:08:11.511084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.805 [2024-11-26 18:08:11.512738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.805 [2024-11-26 18:08:11.512768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:54.805 [2024-11-26 18:08:11.512780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.608 ms 00:21:54.805 [2024-11-26 18:08:11.512789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.805 [2024-11-26 18:08:11.514401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.805 [2024-11-26 18:08:11.514431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:54.805 [2024-11-26 18:08:11.514442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.572 ms 00:21:54.805 [2024-11-26 18:08:11.514463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.805 [2024-11-26 18:08:11.514647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.805 [2024-11-26 18:08:11.514662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:54.805 [2024-11-26 18:08:11.514673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:21:54.805 [2024-11-26 18:08:11.514683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.805 [2024-11-26 18:08:11.537715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.805 [2024-11-26 18:08:11.537758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:54.805 [2024-11-26 18:08:11.537784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.048 ms 00:21:54.805 [2024-11-26 18:08:11.537795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.805 [2024-11-26 18:08:11.544003] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:54.805 [2024-11-26 18:08:11.546887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.805 [2024-11-26 18:08:11.546915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:54.805 [2024-11-26 18:08:11.546927] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.064 ms 00:21:54.805 [2024-11-26 18:08:11.546937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.805 [2024-11-26 18:08:11.547011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.805 [2024-11-26 18:08:11.547024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:54.805 [2024-11-26 18:08:11.547044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:54.805 [2024-11-26 18:08:11.547058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.805 [2024-11-26 18:08:11.548417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.805 [2024-11-26 18:08:11.548473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:54.805 [2024-11-26 18:08:11.548485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.332 ms 00:21:54.805 [2024-11-26 18:08:11.548495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.805 [2024-11-26 18:08:11.550697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.805 [2024-11-26 18:08:11.550725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:54.805 [2024-11-26 18:08:11.550736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.182 ms 00:21:54.805 [2024-11-26 18:08:11.550746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.805 [2024-11-26 18:08:11.550793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.805 [2024-11-26 18:08:11.550804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:54.805 [2024-11-26 18:08:11.550815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:54.805 [2024-11-26 18:08:11.550828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.805 [2024-11-26 18:08:11.550869] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:54.805 [2024-11-26 18:08:11.550885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.805 [2024-11-26 18:08:11.550904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:54.805 [2024-11-26 18:08:11.550914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:54.805 [2024-11-26 18:08:11.550924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.805 [2024-11-26 18:08:11.554724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.805 [2024-11-26 18:08:11.554758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:54.805 [2024-11-26 18:08:11.554771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.786 ms 00:21:54.805 [2024-11-26 18:08:11.554788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.805 [2024-11-26 18:08:11.554854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.805 [2024-11-26 18:08:11.554865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:54.805 [2024-11-26 18:08:11.554881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:54.805 [2024-11-26 18:08:11.554890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.805 [2024-11-26 18:08:11.560316] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 118.177 ms, result 0 00:21:56.184  [2024-11-26T18:08:42.899Z] Copying: 1024/1024 [MB] (average 34 MBps)
[2024-11-26 18:08:42.695968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.973 [2024-11-26 18:08:42.696041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:25.973 [2024-11-26 18:08:42.696062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:25.973 [2024-11-26 18:08:42.696076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.973 [2024-11-26 18:08:42.696107] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:25.973 [2024-11-26 18:08:42.696940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.973 [2024-11-26 18:08:42.697011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:25.973 [2024-11-26 18:08:42.697026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:22:25.973 [2024-11-26 18:08:42.697045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.973 [2024-11-26 18:08:42.697298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.973 [2024-11-26 18:08:42.697314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:25.973 [2024-11-26 18:08:42.697328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:22:25.973 [2024-11-26 18:08:42.697342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.973 [2024-11-26 18:08:42.710064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.973 [2024-11-26 18:08:42.710131] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:25.973 [2024-11-26 18:08:42.710149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.715 ms 00:22:25.973 [2024-11-26 18:08:42.710161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.973 [2024-11-26 18:08:42.716098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.973 [2024-11-26 18:08:42.716185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:22:25.973 [2024-11-26 18:08:42.716200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.896 ms 00:22:25.973 [2024-11-26 18:08:42.716212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.973 [2024-11-26 18:08:42.718249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.973 [2024-11-26 18:08:42.718298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:25.973 [2024-11-26 18:08:42.718312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.958 ms 00:22:25.974 [2024-11-26 18:08:42.718323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.974 [2024-11-26 18:08:42.722492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.974 [2024-11-26 18:08:42.722545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:25.974 [2024-11-26 18:08:42.722561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.142 ms 00:22:25.974 [2024-11-26 18:08:42.722589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.974 [2024-11-26 18:08:42.725849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.974 [2024-11-26 18:08:42.725899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:25.974 [2024-11-26 18:08:42.725915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.218 ms 00:22:25.974 [2024-11-26 18:08:42.725927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.974 [2024-11-26 18:08:42.728035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.974 [2024-11-26 18:08:42.728078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:25.974 [2024-11-26 18:08:42.728092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.090 ms 00:22:25.974 [2024-11-26 18:08:42.728102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.974 [2024-11-26 18:08:42.729592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.974 [2024-11-26 18:08:42.729629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:25.974 [2024-11-26 18:08:42.729642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.459 ms 00:22:25.974 [2024-11-26 18:08:42.729652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.974 [2024-11-26 18:08:42.730776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.974 [2024-11-26 18:08:42.730815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:25.974 [2024-11-26 18:08:42.730827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.091 ms 00:22:25.974 [2024-11-26 18:08:42.730837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.974 [2024-11-26 18:08:42.732128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:25.974 [2024-11-26 18:08:42.732168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:25.974 [2024-11-26 18:08:42.732180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.235 ms 00:22:25.974 [2024-11-26 18:08:42.732190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.974 [2024-11-26 18:08:42.732217] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:25.974 [2024-11-26 18:08:42.732236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:22:25.974 [2024-11-26 18:08:42.732262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:22:25.974 [2024-11-26 18:08:42.732273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732495] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 
18:08:42.732768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:25.974 [2024-11-26 18:08:42.732999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 
00:22:25.975 [2024-11-26 18:08:42.733055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 
wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:25.975 [2024-11-26 18:08:42.733378] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:25.975 [2024-11-26 18:08:42.733388] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dab23966-cac7-4916-9c8e-e833bd8ea971 00:22:25.975 [2024-11-26 18:08:42.733400] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:22:25.975 [2024-11-26 18:08:42.733410] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 156352 00:22:25.975 [2024-11-26 18:08:42.733432] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 154368 00:22:25.975 [2024-11-26 18:08:42.733443] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0129 00:22:25.975 [2024-11-26 18:08:42.733462] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:25.975 [2024-11-26 18:08:42.733473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:25.975 [2024-11-26 18:08:42.733483] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:25.975 [2024-11-26 18:08:42.733493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:25.975 [2024-11-26 18:08:42.733502] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:25.975 [2024-11-26 18:08:42.733512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.975 [2024-11-26 18:08:42.733523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:25.975 [2024-11-26 18:08:42.733533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.298 ms 00:22:25.975 [2024-11-26 18:08:42.733543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.975 [2024-11-26 18:08:42.735345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.975 [2024-11-26 18:08:42.735382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:25.975 [2024-11-26 18:08:42.735394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.785 ms 00:22:25.975 [2024-11-26 18:08:42.735405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.975 [2024-11-26 18:08:42.735508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.975 [2024-11-26 18:08:42.735522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:25.975 [2024-11-26 18:08:42.735533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:25.975 [2024-11-26 18:08:42.735551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.975 [2024-11-26 18:08:42.742491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.975 [2024-11-26 18:08:42.742525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:25.975 [2024-11-26 18:08:42.742539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:22:25.975 [2024-11-26 18:08:42.742550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.975 [2024-11-26 18:08:42.742606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.975 [2024-11-26 18:08:42.742619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:25.975 [2024-11-26 18:08:42.742629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.975 [2024-11-26 18:08:42.742640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.975 [2024-11-26 18:08:42.742742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.975 [2024-11-26 18:08:42.742756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:25.975 [2024-11-26 18:08:42.742776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.975 [2024-11-26 18:08:42.742787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.975 [2024-11-26 18:08:42.742805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.975 [2024-11-26 18:08:42.742817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:25.975 [2024-11-26 18:08:42.742829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.975 [2024-11-26 18:08:42.742839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.975 [2024-11-26 18:08:42.756665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.975 [2024-11-26 18:08:42.756717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:25.975 [2024-11-26 18:08:42.756731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.975 [2024-11-26 18:08:42.756749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.975 [2024-11-26 18:08:42.761947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.975 [2024-11-26 18:08:42.761987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:25.975 [2024-11-26 18:08:42.762000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.975 [2024-11-26 18:08:42.762010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.975 [2024-11-26 18:08:42.762087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.975 [2024-11-26 18:08:42.762104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:25.975 [2024-11-26 18:08:42.762114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.975 [2024-11-26 18:08:42.762124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.975 [2024-11-26 18:08:42.762155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.975 [2024-11-26 18:08:42.762166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:25.975 [2024-11-26 18:08:42.762177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.975 [2024-11-26 18:08:42.762186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.975 [2024-11-26 18:08:42.762297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.975 [2024-11-26 18:08:42.762311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:25.975 
[2024-11-26 18:08:42.762328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.975 [2024-11-26 18:08:42.762338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.975 [2024-11-26 18:08:42.762380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.975 [2024-11-26 18:08:42.762392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:25.975 [2024-11-26 18:08:42.762403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.975 [2024-11-26 18:08:42.762414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.975 [2024-11-26 18:08:42.762454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.975 [2024-11-26 18:08:42.762465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:25.975 [2024-11-26 18:08:42.762493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.975 [2024-11-26 18:08:42.762504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.976 [2024-11-26 18:08:42.762549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.976 [2024-11-26 18:08:42.762561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:25.976 [2024-11-26 18:08:42.762572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.976 [2024-11-26 18:08:42.762582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.976 [2024-11-26 18:08:42.762706] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 66.818 ms, result 0 00:22:26.234 00:22:26.234 00:22:26.234 18:08:43 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:28.139 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:28.139 18:08:44 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:28.139 [2024-11-26 18:08:44.856260] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
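The figures in the FTL dumps above are internally consistent, and the spdk_dd read-back size can be cross-checked the same way. Below is a minimal sanity-check sketch in Python: the variable names are mine, every constant is copied from the log records above, and the 4 KiB FTL block size is an inference from the l2p region math, not something the log states directly.

# Cross-check figures reported in the FTL log above (all constants from the log).
l2p_entries = 20971520            # "L2P entries: 20971520"
l2p_addr_size = 4                 # "L2P address size: 4" (bytes per entry)
l2p_bytes = l2p_entries * l2p_addr_size
assert l2p_bytes == 80 * 2**20    # matches "Region l2p ... blocks: 80.00 MiB"

l2p_blocks = 0x5000               # SB dump: "Region type:0x2 ... blk_sz:0x5000"
block_size = l2p_bytes // l2p_blocks
assert block_size == 4096         # inferred FTL block size of 4 KiB (assumption)

total_writes, user_writes = 156352, 154368   # "total writes" / "user writes"
assert round(total_writes / user_writes, 4) == 1.0129   # matches "WAF: 1.0129"

count = 262144                    # spdk_dd --count=262144 --skip=262144
assert count * block_size == 1024 * 2**20    # 1 GiB, matching "Copying: 1024/1024 [MB]"

If that 4 KiB inference holds, --skip=262144 positions the read 1 GiB into ftl0, so this spdk_dd invocation is reading back the second gigabyte of test data (testfile2) after the dirty shutdown, consistent with the md5sum verification pattern used for testfile above.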
00:22:28.139 [2024-11-26 18:08:44.856450] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87618 ] 00:22:28.139 [2024-11-26 18:08:45.007788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.139 [2024-11-26 18:08:45.056375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.399 [2024-11-26 18:08:45.161977] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:28.399 [2024-11-26 18:08:45.162084] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:28.399 [2024-11-26 18:08:45.316776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.399 [2024-11-26 18:08:45.316841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:28.399 [2024-11-26 18:08:45.316858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:28.399 [2024-11-26 18:08:45.316870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.399 [2024-11-26 18:08:45.316946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.399 [2024-11-26 18:08:45.316961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:28.399 [2024-11-26 18:08:45.316972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:28.399 [2024-11-26 18:08:45.316983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.399 [2024-11-26 18:08:45.317012] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:28.399 [2024-11-26 18:08:45.317349] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:28.399 [2024-11-26 18:08:45.317380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.399 [2024-11-26 18:08:45.317396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:28.399 [2024-11-26 18:08:45.317408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:22:28.399 [2024-11-26 18:08:45.317422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.399 [2024-11-26 18:08:45.319039] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:28.399 [2024-11-26 18:08:45.321651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.399 [2024-11-26 18:08:45.321691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:28.399 [2024-11-26 18:08:45.321732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.618 ms 00:22:28.399 [2024-11-26 18:08:45.321748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.399 [2024-11-26 18:08:45.321823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.399 [2024-11-26 18:08:45.321837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:28.399 [2024-11-26 18:08:45.321848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:22:28.399 [2024-11-26 18:08:45.321859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.660 [2024-11-26 18:08:45.328991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.660 [2024-11-26 
18:08:45.329057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:28.660 [2024-11-26 18:08:45.329071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.092 ms 00:22:28.660 [2024-11-26 18:08:45.329090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.660 [2024-11-26 18:08:45.329192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.660 [2024-11-26 18:08:45.329205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:28.660 [2024-11-26 18:08:45.329217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:28.660 [2024-11-26 18:08:45.329228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.660 [2024-11-26 18:08:45.329305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.660 [2024-11-26 18:08:45.329318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:28.660 [2024-11-26 18:08:45.329333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:28.660 [2024-11-26 18:08:45.329347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.660 [2024-11-26 18:08:45.329387] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:28.660 [2024-11-26 18:08:45.331262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.660 [2024-11-26 18:08:45.331313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:28.660 [2024-11-26 18:08:45.331326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.888 ms 00:22:28.660 [2024-11-26 18:08:45.331337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.660 [2024-11-26 18:08:45.331376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.660 [2024-11-26 18:08:45.331388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:28.660 [2024-11-26 18:08:45.331399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:28.660 [2024-11-26 18:08:45.331412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.660 [2024-11-26 18:08:45.331439] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:28.660 [2024-11-26 18:08:45.331474] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:22:28.660 [2024-11-26 18:08:45.331527] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:28.660 [2024-11-26 18:08:45.331553] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:22:28.660 [2024-11-26 18:08:45.331630] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:22:28.660 [2024-11-26 18:08:45.331651] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:28.660 [2024-11-26 18:08:45.331669] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:22:28.660 [2024-11-26 18:08:45.331688] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:28.660 [2024-11-26 18:08:45.331701] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:28.660 [2024-11-26 18:08:45.331713] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:28.660 [2024-11-26 18:08:45.331724] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:28.660 [2024-11-26 18:08:45.331734] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:22:28.660 [2024-11-26 18:08:45.331745] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:22:28.660 [2024-11-26 18:08:45.331757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.660 [2024-11-26 18:08:45.331768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:28.660 [2024-11-26 18:08:45.331786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:22:28.660 [2024-11-26 18:08:45.331800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.660 [2024-11-26 18:08:45.331861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.660 [2024-11-26 18:08:45.331873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:28.660 [2024-11-26 18:08:45.331889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:28.660 [2024-11-26 18:08:45.331900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.660 [2024-11-26 18:08:45.331975] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:28.660 [2024-11-26 18:08:45.331989] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:28.660 [2024-11-26 18:08:45.332000] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:28.660 [2024-11-26 18:08:45.332019] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.660 [2024-11-26 18:08:45.332042] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:28.660 [2024-11-26 18:08:45.332052] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:28.660 [2024-11-26 18:08:45.332062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:28.660 [2024-11-26 18:08:45.332072] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:28.660 [2024-11-26 18:08:45.332081] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:28.660 [2024-11-26 18:08:45.332091] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:28.660 [2024-11-26 18:08:45.332101] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:28.660 [2024-11-26 18:08:45.332111] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:28.660 [2024-11-26 18:08:45.332120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:28.660 [2024-11-26 18:08:45.332130] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:28.660 [2024-11-26 18:08:45.332144] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:22:28.660 [2024-11-26 18:08:45.332155] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.660 [2024-11-26 18:08:45.332164] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:28.660 [2024-11-26 18:08:45.332174] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:22:28.660 [2024-11-26 18:08:45.332183] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:22:28.660 [2024-11-26 18:08:45.332193] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:22:28.660 [2024-11-26 18:08:45.332202] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:22:28.660 [2024-11-26 18:08:45.332212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:22:28.660 [2024-11-26 18:08:45.332222] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:28.660 [2024-11-26 18:08:45.332231] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:28.660 [2024-11-26 18:08:45.332241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:28.660 [2024-11-26 18:08:45.332250] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:28.660 [2024-11-26 18:08:45.332261] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:22:28.660 [2024-11-26 18:08:45.332270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:28.660 [2024-11-26 18:08:45.332279] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:28.660 [2024-11-26 18:08:45.332289] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:28.660 [2024-11-26 18:08:45.332304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:28.660 [2024-11-26 18:08:45.332314] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:28.660 [2024-11-26 18:08:45.332324] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:22:28.660 [2024-11-26 18:08:45.332333] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:28.660 [2024-11-26 18:08:45.332342] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:28.660 [2024-11-26 18:08:45.332352] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:28.660 [2024-11-26 18:08:45.332361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:28.660 [2024-11-26 18:08:45.332371] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:28.660 [2024-11-26 18:08:45.332380] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:22:28.660 [2024-11-26 18:08:45.332390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:28.660 [2024-11-26 18:08:45.332399] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:28.660 [2024-11-26 18:08:45.332413] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:28.660 [2024-11-26 18:08:45.332423] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:28.660 [2024-11-26 18:08:45.332433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.660 [2024-11-26 18:08:45.332444] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:28.660 [2024-11-26 18:08:45.332454] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:28.660 [2024-11-26 18:08:45.332468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:28.660 [2024-11-26 18:08:45.332478] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:28.660 [2024-11-26 18:08:45.332499] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:28.660 [2024-11-26 18:08:45.332509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:28.660 [2024-11-26 18:08:45.332520] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:28.660 [2024-11-26 18:08:45.332533] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:28.661 [2024-11-26 18:08:45.332545] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:28.661 [2024-11-26 18:08:45.332556] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:22:28.661 [2024-11-26 18:08:45.332566] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:22:28.661 [2024-11-26 18:08:45.332577] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:22:28.661 [2024-11-26 18:08:45.332588] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:22:28.661 [2024-11-26 18:08:45.332599] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:22:28.661 [2024-11-26 18:08:45.332611] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:22:28.661 [2024-11-26 18:08:45.332621] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:22:28.661 [2024-11-26 18:08:45.332632] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:22:28.661 [2024-11-26 18:08:45.332643] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:22:28.661 [2024-11-26 18:08:45.332657] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:22:28.661 [2024-11-26 18:08:45.332669] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:22:28.661 [2024-11-26 18:08:45.332680] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:22:28.661 [2024-11-26 18:08:45.332691] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:28.661 [2024-11-26 18:08:45.332703] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:28.661 [2024-11-26 18:08:45.332715] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:28.661 [2024-11-26 18:08:45.332726] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:28.661 [2024-11-26 18:08:45.332737] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:28.661 [2024-11-26 18:08:45.332749] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:22:28.661 [2024-11-26 18:08:45.332760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.661 [2024-11-26 18:08:45.332770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:28.661 [2024-11-26 18:08:45.332781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.824 ms 00:22:28.661 [2024-11-26 18:08:45.332810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.661 [2024-11-26 18:08:45.342004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.661 [2024-11-26 18:08:45.342063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:28.661 [2024-11-26 18:08:45.342079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.162 ms 00:22:28.661 [2024-11-26 18:08:45.342102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.661 [2024-11-26 18:08:45.342229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.661 [2024-11-26 18:08:45.342243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:28.661 [2024-11-26 18:08:45.342255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:22:28.661 [2024-11-26 18:08:45.342266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.661 [2024-11-26 18:08:45.365003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.661 [2024-11-26 18:08:45.365081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:28.661 [2024-11-26 18:08:45.365113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.686 ms 00:22:28.661 [2024-11-26 18:08:45.365139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.661 [2024-11-26 18:08:45.365210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.661 [2024-11-26 18:08:45.365227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:28.661 [2024-11-26 18:08:45.365252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:28.661 [2024-11-26 18:08:45.365271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.661 [2024-11-26 18:08:45.365830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.661 [2024-11-26 18:08:45.365860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:28.661 [2024-11-26 18:08:45.365876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:22:28.661 [2024-11-26 18:08:45.365902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.661 [2024-11-26 18:08:45.366061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.661 [2024-11-26 18:08:45.366079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:28.661 [2024-11-26 18:08:45.366094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:22:28.661 [2024-11-26 18:08:45.366108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.661 [2024-11-26 18:08:45.373974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.661 [2024-11-26 18:08:45.374035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:28.661 [2024-11-26 18:08:45.374059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.836 ms 00:22:28.661 [2024-11-26 
18:08:45.374070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.661 [2024-11-26 18:08:45.376839] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:28.661 [2024-11-26 18:08:45.376900] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:28.661 [2024-11-26 18:08:45.376919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.661 [2024-11-26 18:08:45.376930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:28.661 [2024-11-26 18:08:45.376941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.695 ms 00:22:28.661 [2024-11-26 18:08:45.376952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.661 [2024-11-26 18:08:45.390867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.661 [2024-11-26 18:08:45.390975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:28.661 [2024-11-26 18:08:45.390992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.893 ms 00:22:28.661 [2024-11-26 18:08:45.391003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.661 [2024-11-26 18:08:45.394059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.661 [2024-11-26 18:08:45.394108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:28.661 [2024-11-26 18:08:45.394123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.975 ms 00:22:28.661 [2024-11-26 18:08:45.394141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.661 [2024-11-26 18:08:45.395833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.661 [2024-11-26 18:08:45.395871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:28.661 [2024-11-26 18:08:45.395883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.648 ms 00:22:28.661 [2024-11-26 18:08:45.395893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.661 [2024-11-26 18:08:45.396102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.661 [2024-11-26 18:08:45.396125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:28.661 [2024-11-26 18:08:45.396137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:22:28.661 [2024-11-26 18:08:45.396147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.661 [2024-11-26 18:08:45.422053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.661 [2024-11-26 18:08:45.422126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:28.661 [2024-11-26 18:08:45.422155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.922 ms 00:22:28.661 [2024-11-26 18:08:45.422174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.661 [2024-11-26 18:08:45.430554] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:28.661 [2024-11-26 18:08:45.434346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.661 [2024-11-26 18:08:45.434393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:28.661 [2024-11-26 18:08:45.434427] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.080 ms 00:22:28.661 [2024-11-26 18:08:45.434446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.661 [2024-11-26 18:08:45.434587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.661 [2024-11-26 18:08:45.434613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:28.661 [2024-11-26 18:08:45.434625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:28.661 [2024-11-26 18:08:45.434635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.661 [2024-11-26 18:08:45.435561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.661 [2024-11-26 18:08:45.435593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:28.661 [2024-11-26 18:08:45.435606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.883 ms 00:22:28.661 [2024-11-26 18:08:45.435616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.661 [2024-11-26 18:08:45.437831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.661 [2024-11-26 18:08:45.437861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:22:28.661 [2024-11-26 18:08:45.437874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.189 ms 00:22:28.661 [2024-11-26 18:08:45.437884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.661 [2024-11-26 18:08:45.437940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.661 [2024-11-26 18:08:45.437951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:28.661 [2024-11-26 18:08:45.437962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:28.661 [2024-11-26 18:08:45.437987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.661 [2024-11-26 18:08:45.438036] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:28.661 [2024-11-26 18:08:45.438059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.661 [2024-11-26 18:08:45.438077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:28.661 [2024-11-26 18:08:45.438102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:22:28.661 [2024-11-26 18:08:45.438112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.661 [2024-11-26 18:08:45.441940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.662 [2024-11-26 18:08:45.441982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:28.662 [2024-11-26 18:08:45.441996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.810 ms 00:22:28.662 [2024-11-26 18:08:45.442014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.662 [2024-11-26 18:08:45.442085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.662 [2024-11-26 18:08:45.442097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:28.662 [2024-11-26 18:08:45.442117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:28.662 [2024-11-26 18:08:45.442127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.662 [2024-11-26 18:08:45.443300] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 126.302 ms, result 0 00:22:30.037  [2024-11-26T18:08:47.897Z] Copying: 33/1024 [MB] (33 MBps) [2024-11-26T18:09:20.021Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-11-26 18:09:19.767825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.095 [2024-11-26 18:09:19.767948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:03.095 [2024-11-26 18:09:19.767989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:03.095 [2024-11-26 18:09:19.768017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.095 [2024-11-26 18:09:19.768074] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:03.095 [2024-11-26 18:09:19.769015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.095 [2024-11-26 18:09:19.769061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:03.095 [2024-11-26 18:09:19.769091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.900 ms 00:23:03.095 [2024-11-26 18:09:19.769116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.095 [2024-11-26 18:09:19.769648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.095 [2024-11-26 18:09:19.769692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:03.095 [2024-11-26 18:09:19.769721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.483 ms 00:23:03.095 [2024-11-26 18:09:19.769747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.095 [2024-11-26 18:09:19.776063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.095 [2024-11-26 18:09:19.776107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:03.095 [2024-11-26 18:09:19.776137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.285 ms 00:23:03.095 [2024-11-26 18:09:19.776172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.095 [2024-11-26 18:09:19.785787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.095 [2024-11-26 18:09:19.785869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:23:03.095 [2024-11-26 18:09:19.785888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.593 ms 00:23:03.095 [2024-11-26 18:09:19.785901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.095 [2024-11-26 18:09:19.787889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.095 [2024-11-26 18:09:19.787933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:03.095 [2024-11-26 18:09:19.787948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.863 ms 00:23:03.095 [2024-11-26 18:09:19.787960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.095 [2024-11-26 18:09:19.792063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.095 [2024-11-26 18:09:19.792124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:03.095 [2024-11-26 18:09:19.792140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.084 ms 00:23:03.095 [2024-11-26 18:09:19.792152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.095 [2024-11-26 18:09:19.795645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.095 [2024-11-26 18:09:19.795686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:03.096 [2024-11-26 18:09:19.795700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.457 ms 00:23:03.096 [2024-11-26 18:09:19.795711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.096 [2024-11-26 18:09:19.797792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.096 [2024-11-26 18:09:19.797829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:03.096 [2024-11-26 18:09:19.797841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.064 ms 00:23:03.096 [2024-11-26 18:09:19.797851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.096 [2024-11-26 18:09:19.799215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.096 [2024-11-26 18:09:19.799252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:03.096 [2024-11-26 18:09:19.799264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.346 ms 00:23:03.096 [2024-11-26 18:09:19.799274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.096 [2024-11-26 18:09:19.800488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.096 [2024-11-26 18:09:19.800536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:03.096 [2024-11-26 
18:09:19.800548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.196 ms 00:23:03.096 [2024-11-26 18:09:19.800558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.096 [2024-11-26 18:09:19.801670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.096 [2024-11-26 18:09:19.801843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:03.096 [2024-11-26 18:09:19.801866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.065 ms 00:23:03.096 [2024-11-26 18:09:19.801879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.096 [2024-11-26 18:09:19.801910] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:03.096 [2024-11-26 18:09:19.801929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:23:03.096 [2024-11-26 18:09:19.801945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:23:03.096 [2024-11-26 18:09:19.801958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.801972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.801984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.801997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802173] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 
18:09:19.802519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:23:03.096 [2024-11-26 18:09:19.802853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:03.096 [2024-11-26 18:09:19.802927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.802940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.802951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.802963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.802991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.803004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.803016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.803029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.803041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.803053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.803065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.803079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.803092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.803105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.803117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.803129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.803141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.803154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.803166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.803191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.803203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.803215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.803226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.803240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.803252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.803263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:03.097 [2024-11-26 18:09:19.803283] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:03.097 [2024-11-26 18:09:19.803308] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dab23966-cac7-4916-9c8e-e833bd8ea971 00:23:03.097 [2024-11-26 18:09:19.803321] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:23:03.097 [2024-11-26 18:09:19.803332] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:03.097 [2024-11-26 18:09:19.803343] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:03.097 [2024-11-26 18:09:19.803354] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:03.097 [2024-11-26 18:09:19.803365] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:03.097 [2024-11-26 18:09:19.803377] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:03.097 [2024-11-26 18:09:19.803388] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:03.097 [2024-11-26 18:09:19.803398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:03.097 [2024-11-26 18:09:19.803409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:03.097 [2024-11-26 18:09:19.803420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.097 [2024-11-26 18:09:19.803431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:03.097 [2024-11-26 18:09:19.803443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.514 ms 00:23:03.097 [2024-11-26 18:09:19.803454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-11-26 18:09:19.805284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.097 [2024-11-26 18:09:19.805422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:03.097 [2024-11-26 18:09:19.805444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.807 ms 00:23:03.097 [2024-11-26 18:09:19.805474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-11-26 18:09:19.805581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.097 [2024-11-26 18:09:19.805601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:03.097 [2024-11-26 18:09:19.805614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:03.097 [2024-11-26 18:09:19.805626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-11-26 18:09:19.812690] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.097 [2024-11-26 18:09:19.812714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:03.097 [2024-11-26 18:09:19.812727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.097 [2024-11-26 18:09:19.812738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-11-26 18:09:19.812790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.097 [2024-11-26 18:09:19.812806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:03.097 [2024-11-26 18:09:19.812817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.097 [2024-11-26 18:09:19.812828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-11-26 18:09:19.812918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.097 [2024-11-26 18:09:19.812933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:03.097 [2024-11-26 18:09:19.812953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.097 [2024-11-26 18:09:19.812963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-11-26 18:09:19.812982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.097 [2024-11-26 18:09:19.812993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:03.097 [2024-11-26 18:09:19.813011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.097 [2024-11-26 18:09:19.813021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-11-26 18:09:19.826402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.097 [2024-11-26 18:09:19.826503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:03.097 [2024-11-26 18:09:19.826547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.097 [2024-11-26 18:09:19.826563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-11-26 18:09:19.831915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.097 [2024-11-26 18:09:19.832120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:03.097 [2024-11-26 18:09:19.832161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.097 [2024-11-26 18:09:19.832172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-11-26 18:09:19.832254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.097 [2024-11-26 18:09:19.832267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:03.097 [2024-11-26 18:09:19.832279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.097 [2024-11-26 18:09:19.832289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-11-26 18:09:19.832323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.097 [2024-11-26 18:09:19.832335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:03.097 [2024-11-26 18:09:19.832346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.097 [2024-11-26 18:09:19.832357] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-11-26 18:09:19.832463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.097 [2024-11-26 18:09:19.832478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:03.097 [2024-11-26 18:09:19.832489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.097 [2024-11-26 18:09:19.832500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-11-26 18:09:19.832541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.097 [2024-11-26 18:09:19.832554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:03.097 [2024-11-26 18:09:19.832566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.097 [2024-11-26 18:09:19.832576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-11-26 18:09:19.832625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.097 [2024-11-26 18:09:19.832639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:03.097 [2024-11-26 18:09:19.832650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.097 [2024-11-26 18:09:19.832660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-11-26 18:09:19.832711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:03.097 [2024-11-26 18:09:19.832725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:03.097 [2024-11-26 18:09:19.832736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:03.097 [2024-11-26 18:09:19.832746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.097 [2024-11-26 18:09:19.832886] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 65.150 ms, result 0 00:23:03.356 00:23:03.356 00:23:03.356 18:09:20 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:23:05.271 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:23:05.271 18:09:21 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:23:05.271 18:09:21 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:23:05.271 18:09:21 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:05.271 18:09:21 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:05.271 18:09:22 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:23:05.271 18:09:22 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:05.271 18:09:22 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:23:05.271 Process with pid 85940 is not found 00:23:05.271 18:09:22 -- ftl/dirty_shutdown.sh@37 -- # killprocess 85940 00:23:05.271 18:09:22 -- common/autotest_common.sh@936 -- # '[' -z 85940 ']' 00:23:05.271 18:09:22 -- common/autotest_common.sh@940 -- # kill -0 85940 00:23:05.271 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (85940) - No such process 00:23:05.271 18:09:22 -- common/autotest_common.sh@963 -- # echo 'Process with pid 85940 is not found' 00:23:05.271 18:09:22 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:23:05.854 Remove shared memory files 00:23:05.854 18:09:22 -- 
ftl/dirty_shutdown.sh@39 -- # remove_shm 00:23:05.854 18:09:22 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:05.854 18:09:22 -- ftl/common.sh@205 -- # rm -f rm -f 00:23:05.854 18:09:22 -- ftl/common.sh@206 -- # rm -f rm -f 00:23:05.854 18:09:22 -- ftl/common.sh@207 -- # rm -f rm -f 00:23:05.854 18:09:22 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:05.854 18:09:22 -- ftl/common.sh@209 -- # rm -f rm -f 00:23:05.854 ************************************ 00:23:05.854 END TEST ftl_dirty_shutdown 00:23:05.854 ************************************ 00:23:05.854 00:23:05.854 real 3m15.144s 00:23:05.854 user 3m39.853s 00:23:05.854 sys 0m36.735s 00:23:05.854 18:09:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:05.854 18:09:22 -- common/autotest_common.sh@10 -- # set +x 00:23:05.854 18:09:22 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:23:05.854 18:09:22 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:23:05.854 18:09:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:05.854 18:09:22 -- common/autotest_common.sh@10 -- # set +x 00:23:05.854 ************************************ 00:23:05.854 START TEST ftl_upgrade_shutdown 00:23:05.854 ************************************ 00:23:05.854 18:09:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:23:05.854 * Looking for test storage... 00:23:05.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:05.854 18:09:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:05.854 18:09:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:05.854 18:09:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:06.113 18:09:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:06.113 18:09:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:06.113 18:09:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:06.113 18:09:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:06.113 18:09:22 -- scripts/common.sh@335 -- # IFS=.-: 00:23:06.113 18:09:22 -- scripts/common.sh@335 -- # read -ra ver1 00:23:06.113 18:09:22 -- scripts/common.sh@336 -- # IFS=.-: 00:23:06.113 18:09:22 -- scripts/common.sh@336 -- # read -ra ver2 00:23:06.113 18:09:22 -- scripts/common.sh@337 -- # local 'op=<' 00:23:06.113 18:09:22 -- scripts/common.sh@339 -- # ver1_l=2 00:23:06.113 18:09:22 -- scripts/common.sh@340 -- # ver2_l=1 00:23:06.113 18:09:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:06.113 18:09:22 -- scripts/common.sh@343 -- # case "$op" in 00:23:06.113 18:09:22 -- scripts/common.sh@344 -- # : 1 00:23:06.113 18:09:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:06.113 18:09:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:06.113 18:09:22 -- scripts/common.sh@364 -- # decimal 1 00:23:06.113 18:09:22 -- scripts/common.sh@352 -- # local d=1 00:23:06.113 18:09:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:06.113 18:09:22 -- scripts/common.sh@354 -- # echo 1 00:23:06.113 18:09:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:06.113 18:09:22 -- scripts/common.sh@365 -- # decimal 2 00:23:06.113 18:09:22 -- scripts/common.sh@352 -- # local d=2 00:23:06.113 18:09:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:06.113 18:09:22 -- scripts/common.sh@354 -- # echo 2 00:23:06.113 18:09:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:06.114 18:09:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:06.114 18:09:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:06.114 18:09:22 -- scripts/common.sh@367 -- # return 0 00:23:06.114 18:09:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:06.114 18:09:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:06.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.114 --rc genhtml_branch_coverage=1 00:23:06.114 --rc genhtml_function_coverage=1 00:23:06.114 --rc genhtml_legend=1 00:23:06.114 --rc geninfo_all_blocks=1 00:23:06.114 --rc geninfo_unexecuted_blocks=1 00:23:06.114 00:23:06.114 ' 00:23:06.114 18:09:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:06.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.114 --rc genhtml_branch_coverage=1 00:23:06.114 --rc genhtml_function_coverage=1 00:23:06.114 --rc genhtml_legend=1 00:23:06.114 --rc geninfo_all_blocks=1 00:23:06.114 --rc geninfo_unexecuted_blocks=1 00:23:06.114 00:23:06.114 ' 00:23:06.114 18:09:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:06.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.114 --rc genhtml_branch_coverage=1 00:23:06.114 --rc genhtml_function_coverage=1 00:23:06.114 --rc genhtml_legend=1 00:23:06.114 --rc geninfo_all_blocks=1 00:23:06.114 --rc geninfo_unexecuted_blocks=1 00:23:06.114 00:23:06.114 ' 00:23:06.114 18:09:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:06.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.114 --rc genhtml_branch_coverage=1 00:23:06.114 --rc genhtml_function_coverage=1 00:23:06.114 --rc genhtml_legend=1 00:23:06.114 --rc geninfo_all_blocks=1 00:23:06.114 --rc geninfo_unexecuted_blocks=1 00:23:06.114 00:23:06.114 ' 00:23:06.114 18:09:22 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:06.114 18:09:22 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:23:06.114 18:09:22 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:06.114 18:09:22 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:06.114 18:09:22 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
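The xtrace above steps through the version guard in scripts/common.sh: lt 1.15 2 delegates to cmp_versions, which splits each dotted version on '.', '-' and ':' into numeric fields and compares them field by field, treating missing fields as zero. Below is a minimal standalone reconstruction of that logic; the function names follow the trace, but the bodies are simplified (purely numeric fields are assumed, and the decimal validation helper seen at scripts/common.sh@352 is skipped):

cmp_versions() {                        # usage: cmp_versions 1.15 '<' 2
    local ver1 ver2 v op=$2
    IFS='.-:' read -ra ver1 <<< "$1"    # 1.15 -> (1 15)
    IFS='.-:' read -ra ver2 <<< "$3"    # 2    -> (2)
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # pad the shorter version with zeros
        if ((d1 > d2)); then [[ $op == '>' ]]; return; fi
        if ((d1 < d2)); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '==' ]]                   # every field compared equal
}
lt() { cmp_versions "$1" '<' "$2"; }    # lt 1.15 2 succeeds (1 < 2), matching the trace

The real helper routes the '>'/'<'/'==' cases through the ver1[v] > ver2[v] comparisons visible at scripts/common.sh@366-367; the sketch collapses that into an early return per field.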
00:23:06.114 18:09:22 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:06.114 18:09:22 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:06.114 18:09:22 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:06.114 18:09:22 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:06.114 18:09:22 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:06.114 18:09:22 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:06.114 18:09:22 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:06.114 18:09:22 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:06.114 18:09:22 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:06.114 18:09:22 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:06.114 18:09:22 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:06.114 18:09:22 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:06.114 18:09:22 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:06.114 18:09:22 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:06.114 18:09:22 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:06.114 18:09:22 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:06.114 18:09:22 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:06.114 18:09:22 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:06.114 18:09:22 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:06.114 18:09:22 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:06.114 18:09:22 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:06.114 18:09:22 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:06.114 18:09:22 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:06.114 18:09:22 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:06.114 18:09:22 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:06.114 18:09:22 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:23:06.114 18:09:22 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:23:06.114 18:09:22 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:23:06.114 18:09:22 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:23:06.114 18:09:22 -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:23:06.114 18:09:22 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:23:06.114 18:09:22 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:23:06.114 18:09:22 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:23:06.114 18:09:22 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:23:06.114 18:09:22 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:23:06.114 18:09:22 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:23:06.114 18:09:22 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:23:06.114 18:09:22 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:23:06.114 18:09:22 -- ftl/common.sh@81 -- # local base_bdev= 00:23:06.114 18:09:22 -- ftl/common.sh@82 -- # local cache_bdev= 00:23:06.114 18:09:22 -- ftl/common.sh@84 -- # [[ -f 
/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:06.114 18:09:22 -- ftl/common.sh@89 -- # spdk_tgt_pid=88075 00:23:06.114 18:09:22 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:23:06.114 18:09:22 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:23:06.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.114 18:09:22 -- ftl/common.sh@91 -- # waitforlisten 88075 00:23:06.114 18:09:22 -- common/autotest_common.sh@829 -- # '[' -z 88075 ']' 00:23:06.114 18:09:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.114 18:09:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:06.114 18:09:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.114 18:09:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:06.114 18:09:22 -- common/autotest_common.sh@10 -- # set +x 00:23:06.114 [2024-11-26 18:09:22.940202] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:06.114 [2024-11-26 18:09:22.940326] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88075 ] 00:23:06.374 [2024-11-26 18:09:23.084541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.374 [2024-11-26 18:09:23.131524] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:06.374 [2024-11-26 18:09:23.131965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.963 18:09:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:06.963 18:09:23 -- common/autotest_common.sh@862 -- # return 0 00:23:06.963 18:09:23 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:06.963 18:09:23 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:23:06.963 18:09:23 -- ftl/common.sh@99 -- # local params 00:23:06.963 18:09:23 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:06.963 18:09:23 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:23:06.963 18:09:23 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:06.963 18:09:23 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:23:06.963 18:09:23 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:06.963 18:09:23 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:23:06.963 18:09:23 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:06.963 18:09:23 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:23:06.963 18:09:23 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:06.963 18:09:23 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:23:06.963 18:09:23 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:06.963 18:09:23 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:23:06.963 18:09:23 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:23:06.963 18:09:23 -- ftl/common.sh@54 -- # local name=base 00:23:06.963 18:09:23 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:23:06.963 18:09:23 -- ftl/common.sh@56 -- # local size=20480 00:23:06.963 18:09:23 -- ftl/common.sh@59 -- # local base_bdev 00:23:06.963 18:09:23 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t 
PCIe -a 0000:00:07.0 00:23:07.222 18:09:24 -- ftl/common.sh@60 -- # base_bdev=basen1 00:23:07.222 18:09:24 -- ftl/common.sh@62 -- # local base_size 00:23:07.222 18:09:24 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:23:07.222 18:09:24 -- common/autotest_common.sh@1367 -- # local bdev_name=basen1 00:23:07.222 18:09:24 -- common/autotest_common.sh@1368 -- # local bdev_info 00:23:07.222 18:09:24 -- common/autotest_common.sh@1369 -- # local bs 00:23:07.222 18:09:24 -- common/autotest_common.sh@1370 -- # local nb 00:23:07.222 18:09:24 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:23:07.491 18:09:24 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:23:07.491 { 00:23:07.491 "name": "basen1", 00:23:07.491 "aliases": [ 00:23:07.491 "232f7a9d-0064-4983-9621-86c7ab19ec04" 00:23:07.491 ], 00:23:07.491 "product_name": "NVMe disk", 00:23:07.491 "block_size": 4096, 00:23:07.491 "num_blocks": 1310720, 00:23:07.491 "uuid": "232f7a9d-0064-4983-9621-86c7ab19ec04", 00:23:07.491 "assigned_rate_limits": { 00:23:07.491 "rw_ios_per_sec": 0, 00:23:07.491 "rw_mbytes_per_sec": 0, 00:23:07.491 "r_mbytes_per_sec": 0, 00:23:07.491 "w_mbytes_per_sec": 0 00:23:07.491 }, 00:23:07.491 "claimed": true, 00:23:07.491 "claim_type": "read_many_write_one", 00:23:07.491 "zoned": false, 00:23:07.491 "supported_io_types": { 00:23:07.491 "read": true, 00:23:07.492 "write": true, 00:23:07.492 "unmap": true, 00:23:07.492 "write_zeroes": true, 00:23:07.492 "flush": true, 00:23:07.492 "reset": true, 00:23:07.492 "compare": true, 00:23:07.492 "compare_and_write": false, 00:23:07.492 "abort": true, 00:23:07.492 "nvme_admin": true, 00:23:07.492 "nvme_io": true 00:23:07.492 }, 00:23:07.492 "driver_specific": { 00:23:07.492 "nvme": [ 00:23:07.492 { 00:23:07.492 "pci_address": "0000:00:07.0", 00:23:07.492 "trid": { 00:23:07.492 "trtype": "PCIe", 00:23:07.492 "traddr": "0000:00:07.0" 00:23:07.492 }, 00:23:07.492 "ctrlr_data": { 00:23:07.492 "cntlid": 0, 00:23:07.492 "vendor_id": "0x1b36", 00:23:07.492 "model_number": "QEMU NVMe Ctrl", 00:23:07.492 "serial_number": "12341", 00:23:07.492 "firmware_revision": "8.0.0", 00:23:07.492 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:07.492 "oacs": { 00:23:07.492 "security": 0, 00:23:07.492 "format": 1, 00:23:07.492 "firmware": 0, 00:23:07.492 "ns_manage": 1 00:23:07.492 }, 00:23:07.492 "multi_ctrlr": false, 00:23:07.492 "ana_reporting": false 00:23:07.492 }, 00:23:07.492 "vs": { 00:23:07.492 "nvme_version": "1.4" 00:23:07.492 }, 00:23:07.492 "ns_data": { 00:23:07.492 "id": 1, 00:23:07.492 "can_share": false 00:23:07.492 } 00:23:07.492 } 00:23:07.492 ], 00:23:07.492 "mp_policy": "active_passive" 00:23:07.492 } 00:23:07.492 } 00:23:07.492 ]' 00:23:07.492 18:09:24 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:23:07.492 18:09:24 -- common/autotest_common.sh@1372 -- # bs=4096 00:23:07.492 18:09:24 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:23:07.492 18:09:24 -- common/autotest_common.sh@1373 -- # nb=1310720 00:23:07.492 18:09:24 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:23:07.492 18:09:24 -- common/autotest_common.sh@1377 -- # echo 5120 00:23:07.492 18:09:24 -- ftl/common.sh@63 -- # base_size=5120 00:23:07.492 18:09:24 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:23:07.492 18:09:24 -- ftl/common.sh@67 -- # clear_lvols 00:23:07.492 18:09:24 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:07.492 18:09:24 -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:23:08.058 18:09:24 -- ftl/common.sh@28 -- # stores=f1bb7d02-aedd-44cb-ba29-289bb8adb305 00:23:08.058 18:09:24 -- ftl/common.sh@29 -- # for lvs in $stores 00:23:08.058 18:09:24 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f1bb7d02-aedd-44cb-ba29-289bb8adb305 00:23:08.058 18:09:24 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:23:08.317 18:09:25 -- ftl/common.sh@68 -- # lvs=2e5ee114-336d-46c1-b10e-ab334fb49274 00:23:08.317 18:09:25 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 2e5ee114-336d-46c1-b10e-ab334fb49274 00:23:08.575 18:09:25 -- ftl/common.sh@107 -- # base_bdev=35832740-3899-4248-8def-8d9259a5959f 00:23:08.575 18:09:25 -- ftl/common.sh@108 -- # [[ -z 35832740-3899-4248-8def-8d9259a5959f ]] 00:23:08.575 18:09:25 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 35832740-3899-4248-8def-8d9259a5959f 5120 00:23:08.575 18:09:25 -- ftl/common.sh@35 -- # local name=cache 00:23:08.575 18:09:25 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:23:08.575 18:09:25 -- ftl/common.sh@37 -- # local base_bdev=35832740-3899-4248-8def-8d9259a5959f 00:23:08.576 18:09:25 -- ftl/common.sh@38 -- # local cache_size=5120 00:23:08.576 18:09:25 -- ftl/common.sh@41 -- # get_bdev_size 35832740-3899-4248-8def-8d9259a5959f 00:23:08.576 18:09:25 -- common/autotest_common.sh@1367 -- # local bdev_name=35832740-3899-4248-8def-8d9259a5959f 00:23:08.576 18:09:25 -- common/autotest_common.sh@1368 -- # local bdev_info 00:23:08.576 18:09:25 -- common/autotest_common.sh@1369 -- # local bs 00:23:08.576 18:09:25 -- common/autotest_common.sh@1370 -- # local nb 00:23:08.576 18:09:25 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 35832740-3899-4248-8def-8d9259a5959f 00:23:08.834 18:09:25 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:23:08.834 { 00:23:08.834 "name": "35832740-3899-4248-8def-8d9259a5959f", 00:23:08.834 "aliases": [ 00:23:08.834 "lvs/basen1p0" 00:23:08.834 ], 00:23:08.834 "product_name": "Logical Volume", 00:23:08.834 "block_size": 4096, 00:23:08.834 "num_blocks": 5242880, 00:23:08.834 "uuid": "35832740-3899-4248-8def-8d9259a5959f", 00:23:08.834 "assigned_rate_limits": { 00:23:08.834 "rw_ios_per_sec": 0, 00:23:08.834 "rw_mbytes_per_sec": 0, 00:23:08.834 "r_mbytes_per_sec": 0, 00:23:08.834 "w_mbytes_per_sec": 0 00:23:08.834 }, 00:23:08.834 "claimed": false, 00:23:08.834 "zoned": false, 00:23:08.834 "supported_io_types": { 00:23:08.834 "read": true, 00:23:08.834 "write": true, 00:23:08.834 "unmap": true, 00:23:08.834 "write_zeroes": true, 00:23:08.834 "flush": false, 00:23:08.834 "reset": true, 00:23:08.834 "compare": false, 00:23:08.834 "compare_and_write": false, 00:23:08.834 "abort": false, 00:23:08.834 "nvme_admin": false, 00:23:08.834 "nvme_io": false 00:23:08.834 }, 00:23:08.834 "driver_specific": { 00:23:08.834 "lvol": { 00:23:08.834 "lvol_store_uuid": "2e5ee114-336d-46c1-b10e-ab334fb49274", 00:23:08.834 "base_bdev": "basen1", 00:23:08.834 "thin_provision": true, 00:23:08.834 "snapshot": false, 00:23:08.834 "clone": false, 00:23:08.834 "esnap_clone": false 00:23:08.834 } 00:23:08.834 } 00:23:08.834 } 00:23:08.834 ]' 00:23:08.834 18:09:25 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:23:08.834 18:09:25 -- common/autotest_common.sh@1372 -- # bs=4096 00:23:08.834 18:09:25 -- common/autotest_common.sh@1373 -- # jq '.[] 
.num_blocks' 00:23:08.834 18:09:25 -- common/autotest_common.sh@1373 -- # nb=5242880 00:23:08.834 18:09:25 -- common/autotest_common.sh@1376 -- # bdev_size=20480 00:23:08.834 18:09:25 -- common/autotest_common.sh@1377 -- # echo 20480 00:23:08.834 18:09:25 -- ftl/common.sh@41 -- # local base_size=1024 00:23:08.834 18:09:25 -- ftl/common.sh@44 -- # local nvc_bdev 00:23:08.834 18:09:25 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:23:09.094 18:09:25 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:23:09.094 18:09:25 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:23:09.094 18:09:25 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:23:09.357 18:09:26 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:23:09.357 18:09:26 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:23:09.357 18:09:26 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 35832740-3899-4248-8def-8d9259a5959f -c cachen1p0 --l2p_dram_limit 2 00:23:09.623 [2024-11-26 18:09:26.443362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.623 [2024-11-26 18:09:26.443646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:23:09.623 [2024-11-26 18:09:26.443692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:23:09.623 [2024-11-26 18:09:26.443705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.623 [2024-11-26 18:09:26.443797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.623 [2024-11-26 18:09:26.443811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:09.623 [2024-11-26 18:09:26.443829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:23:09.623 [2024-11-26 18:09:26.443841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.623 [2024-11-26 18:09:26.443883] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:23:09.623 [2024-11-26 18:09:26.444174] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:23:09.623 [2024-11-26 18:09:26.444198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.623 [2024-11-26 18:09:26.444217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:09.623 [2024-11-26 18:09:26.444233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.329 ms 00:23:09.623 [2024-11-26 18:09:26.444244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.623 [2024-11-26 18:09:26.444326] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 6cc17920-e1e5-4391-b94b-a54981d27a0e 00:23:09.623 [2024-11-26 18:09:26.445797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.623 [2024-11-26 18:09:26.445826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:23:09.623 [2024-11-26 18:09:26.445839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:23:09.623 [2024-11-26 18:09:26.445853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.623 [2024-11-26 18:09:26.453514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.623 [2024-11-26 18:09:26.453575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] 
name: Initialize memory pools 00:23:09.623 [2024-11-26 18:09:26.453591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.621 ms 00:23:09.623 [2024-11-26 18:09:26.453625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.623 [2024-11-26 18:09:26.453684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.624 [2024-11-26 18:09:26.453707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:09.624 [2024-11-26 18:09:26.453719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:23:09.624 [2024-11-26 18:09:26.453733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.624 [2024-11-26 18:09:26.453817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.624 [2024-11-26 18:09:26.453833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:23:09.624 [2024-11-26 18:09:26.453852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:23:09.624 [2024-11-26 18:09:26.453870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.624 [2024-11-26 18:09:26.453902] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:23:09.624 [2024-11-26 18:09:26.455841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.624 [2024-11-26 18:09:26.455868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:09.624 [2024-11-26 18:09:26.455884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.949 ms 00:23:09.624 [2024-11-26 18:09:26.455894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.624 [2024-11-26 18:09:26.455934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.624 [2024-11-26 18:09:26.455946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:23:09.624 [2024-11-26 18:09:26.455967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:23:09.624 [2024-11-26 18:09:26.455978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.624 [2024-11-26 18:09:26.456003] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:23:09.624 [2024-11-26 18:09:26.456120] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:23:09.624 [2024-11-26 18:09:26.456152] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:23:09.624 [2024-11-26 18:09:26.456166] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:23:09.624 [2024-11-26 18:09:26.456186] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:23:09.624 [2024-11-26 18:09:26.456198] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:23:09.624 [2024-11-26 18:09:26.456215] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:23:09.624 [2024-11-26 18:09:26.456225] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:23:09.624 [2024-11-26 18:09:26.456238] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:23:09.624 [2024-11-26 18:09:26.456248] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:23:09.624 [2024-11-26 
18:09:26.456262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.624 [2024-11-26 18:09:26.456273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:23:09.624 [2024-11-26 18:09:26.456286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.261 ms 00:23:09.624 [2024-11-26 18:09:26.456297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.624 [2024-11-26 18:09:26.456367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.624 [2024-11-26 18:09:26.456379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:23:09.624 [2024-11-26 18:09:26.456392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:23:09.624 [2024-11-26 18:09:26.456402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.624 [2024-11-26 18:09:26.456487] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:23:09.624 [2024-11-26 18:09:26.456500] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:23:09.624 [2024-11-26 18:09:26.456514] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:09.624 [2024-11-26 18:09:26.456525] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:09.624 [2024-11-26 18:09:26.456538] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:23:09.624 [2024-11-26 18:09:26.456548] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:23:09.624 [2024-11-26 18:09:26.456561] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:23:09.624 [2024-11-26 18:09:26.456571] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:23:09.624 [2024-11-26 18:09:26.456583] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:23:09.624 [2024-11-26 18:09:26.456594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:09.624 [2024-11-26 18:09:26.456607] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:23:09.624 [2024-11-26 18:09:26.456618] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:23:09.624 [2024-11-26 18:09:26.456635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:09.624 [2024-11-26 18:09:26.456645] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:23:09.624 [2024-11-26 18:09:26.456658] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:23:09.624 [2024-11-26 18:09:26.456668] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:09.624 [2024-11-26 18:09:26.456680] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:23:09.624 [2024-11-26 18:09:26.456690] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:23:09.624 [2024-11-26 18:09:26.456702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:09.624 [2024-11-26 18:09:26.456712] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:23:09.624 [2024-11-26 18:09:26.456724] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:23:09.624 [2024-11-26 18:09:26.456734] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:23:09.624 [2024-11-26 18:09:26.456747] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:23:09.624 [2024-11-26 18:09:26.456757] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:23:09.624 [2024-11-26 
18:09:26.456769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:23:09.624 [2024-11-26 18:09:26.456779] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:23:09.624 [2024-11-26 18:09:26.456791] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:23:09.624 [2024-11-26 18:09:26.456801] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:23:09.624 [2024-11-26 18:09:26.456815] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:23:09.624 [2024-11-26 18:09:26.456825] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:23:09.624 [2024-11-26 18:09:26.456836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:23:09.624 [2024-11-26 18:09:26.456846] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:23:09.624 [2024-11-26 18:09:26.456858] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:23:09.624 [2024-11-26 18:09:26.456868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:23:09.624 [2024-11-26 18:09:26.456880] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:23:09.624 [2024-11-26 18:09:26.456889] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:23:09.624 [2024-11-26 18:09:26.456901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:09.624 [2024-11-26 18:09:26.456911] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:23:09.624 [2024-11-26 18:09:26.456924] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:23:09.624 [2024-11-26 18:09:26.456934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:09.624 [2024-11-26 18:09:26.456945] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:23:09.624 [2024-11-26 18:09:26.456957] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:23:09.624 [2024-11-26 18:09:26.456973] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:09.624 [2024-11-26 18:09:26.456983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:09.624 [2024-11-26 18:09:26.456999] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:23:09.624 [2024-11-26 18:09:26.457009] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:23:09.624 [2024-11-26 18:09:26.457021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:23:09.624 [2024-11-26 18:09:26.457031] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:23:09.624 [2024-11-26 18:09:26.457044] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:23:09.624 [2024-11-26 18:09:26.457054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:23:09.624 [2024-11-26 18:09:26.457068] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:23:09.624 [2024-11-26 18:09:26.457094] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:09.624 [2024-11-26 18:09:26.457110] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:23:09.624 [2024-11-26 18:09:26.457121] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:23:09.624 [2024-11-26 
18:09:26.457135] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:23:09.624 [2024-11-26 18:09:26.457146] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:23:09.624 [2024-11-26 18:09:26.457160] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:23:09.624 [2024-11-26 18:09:26.457171] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:23:09.624 [2024-11-26 18:09:26.457185] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:23:09.624 [2024-11-26 18:09:26.457196] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:23:09.624 [2024-11-26 18:09:26.457213] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:23:09.624 [2024-11-26 18:09:26.457224] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:23:09.624 [2024-11-26 18:09:26.457238] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:23:09.624 [2024-11-26 18:09:26.457249] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:23:09.624 [2024-11-26 18:09:26.457263] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:23:09.624 [2024-11-26 18:09:26.457273] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:23:09.625 [2024-11-26 18:09:26.457287] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:09.625 [2024-11-26 18:09:26.457299] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:09.625 [2024-11-26 18:09:26.457312] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:23:09.625 [2024-11-26 18:09:26.457323] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:23:09.625 [2024-11-26 18:09:26.457336] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:23:09.625 [2024-11-26 18:09:26.457348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.625 [2024-11-26 18:09:26.457361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:23:09.625 [2024-11-26 18:09:26.457373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.913 ms 00:23:09.625 [2024-11-26 18:09:26.457386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.625 [2024-11-26 18:09:26.466186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.625 [2024-11-26 18:09:26.466439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl] name: Initialize metadata 00:23:09.625 [2024-11-26 18:09:26.466551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.761 ms 00:23:09.625 [2024-11-26 18:09:26.466597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.625 [2024-11-26 18:09:26.466686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.625 [2024-11-26 18:09:26.466737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:23:09.625 [2024-11-26 18:09:26.466870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:23:09.625 [2024-11-26 18:09:26.466914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.625 [2024-11-26 18:09:26.479361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.625 [2024-11-26 18:09:26.479701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:09.625 [2024-11-26 18:09:26.479799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.362 ms 00:23:09.625 [2024-11-26 18:09:26.479850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.625 [2024-11-26 18:09:26.479928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.625 [2024-11-26 18:09:26.479977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:09.625 [2024-11-26 18:09:26.480106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:09.625 [2024-11-26 18:09:26.480175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.625 [2024-11-26 18:09:26.480723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.625 [2024-11-26 18:09:26.480857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:09.625 [2024-11-26 18:09:26.480947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.449 ms 00:23:09.625 [2024-11-26 18:09:26.480991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.625 [2024-11-26 18:09:26.481116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.625 [2024-11-26 18:09:26.481168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:09.625 [2024-11-26 18:09:26.481247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:23:09.625 [2024-11-26 18:09:26.481336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.625 [2024-11-26 18:09:26.488820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.625 [2024-11-26 18:09:26.489054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:09.625 [2024-11-26 18:09:26.489229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.436 ms 00:23:09.625 [2024-11-26 18:09:26.489273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.625 [2024-11-26 18:09:26.499275] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:23:09.625 [2024-11-26 18:09:26.500615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.625 [2024-11-26 18:09:26.500760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:23:09.625 [2024-11-26 18:09:26.500857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.222 ms 00:23:09.625 [2024-11-26 18:09:26.500895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.625 [2024-11-26 
18:09:26.517057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.625 [2024-11-26 18:09:26.517300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:23:09.625 [2024-11-26 18:09:26.517413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.101 ms 00:23:09.625 [2024-11-26 18:09:26.517431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.625 [2024-11-26 18:09:26.517528] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 00:23:09.625 [2024-11-26 18:09:26.517547] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:23:12.965 [2024-11-26 18:09:29.627802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:12.965 [2024-11-26 18:09:29.627880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:23:12.965 [2024-11-26 18:09:29.627902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3115.312 ms 00:23:12.965 [2024-11-26 18:09:29.627913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:12.965 [2024-11-26 18:09:29.628030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:12.965 [2024-11-26 18:09:29.628043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:23:12.965 [2024-11-26 18:09:29.628057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:23:12.965 [2024-11-26 18:09:29.628068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:12.965 [2024-11-26 18:09:29.631322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:12.965 [2024-11-26 18:09:29.631578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:23:12.965 [2024-11-26 18:09:29.631615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.204 ms 00:23:12.965 [2024-11-26 18:09:29.631627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:12.965 [2024-11-26 18:09:29.634754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:12.965 [2024-11-26 18:09:29.634945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:23:12.965 [2024-11-26 18:09:29.634975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.077 ms 00:23:12.965 [2024-11-26 18:09:29.634986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:12.965 [2024-11-26 18:09:29.635168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:12.965 [2024-11-26 18:09:29.635181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:23:12.965 [2024-11-26 18:09:29.635196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.134 ms 00:23:12.965 [2024-11-26 18:09:29.635207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:12.965 [2024-11-26 18:09:29.664700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:12.965 [2024-11-26 18:09:29.664964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:23:12.965 [2024-11-26 18:09:29.665015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 29.507 ms 00:23:12.965 [2024-11-26 18:09:29.665027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:12.965 [2024-11-26 18:09:29.670677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 
00:23:12.965 [2024-11-26 18:09:29.670740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:23:12.965 [2024-11-26 18:09:29.670763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.594 ms 00:23:12.965 [2024-11-26 18:09:29.670775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:12.965 [2024-11-26 18:09:29.673050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:12.965 [2024-11-26 18:09:29.673091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:23:12.965 [2024-11-26 18:09:29.673106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.227 ms 00:23:12.965 [2024-11-26 18:09:29.673117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:12.965 [2024-11-26 18:09:29.677522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:12.965 [2024-11-26 18:09:29.677568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:23:12.965 [2024-11-26 18:09:29.677586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.378 ms 00:23:12.965 [2024-11-26 18:09:29.677597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:12.965 [2024-11-26 18:09:29.677645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:12.965 [2024-11-26 18:09:29.677658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:23:12.965 [2024-11-26 18:09:29.677676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:23:12.965 [2024-11-26 18:09:29.677687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:12.965 [2024-11-26 18:09:29.677784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:12.965 [2024-11-26 18:09:29.677797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:23:12.965 [2024-11-26 18:09:29.677815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:23:12.965 [2024-11-26 18:09:29.677825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:12.965 [2024-11-26 18:09:29.678957] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3240.387 ms, result 0 00:23:12.965 { 00:23:12.965 "name": "ftl", 00:23:12.965 "uuid": "6cc17920-e1e5-4391-b94b-a54981d27a0e" 00:23:12.965 } 00:23:12.965 18:09:29 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:23:13.223 [2024-11-26 18:09:29.909809] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.223 18:09:29 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:23:13.223 18:09:30 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:23:13.481 [2024-11-26 18:09:30.349528] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:23:13.481 18:09:30 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:23:13.740 [2024-11-26 18:09:30.609614] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:13.740 18:09:30 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:23:14.307 Fill FTL, 
iteration 1 00:23:14.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:23:14.307 18:09:30 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:23:14.307 18:09:30 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:23:14.307 18:09:30 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:23:14.307 18:09:30 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:23:14.307 18:09:30 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:23:14.307 18:09:30 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:23:14.307 18:09:30 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:23:14.307 18:09:30 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:23:14.307 18:09:30 -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:23:14.307 18:09:30 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:23:14.307 18:09:30 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:23:14.307 18:09:30 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:23:14.307 18:09:30 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:14.307 18:09:30 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:14.307 18:09:30 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:14.307 18:09:30 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:23:14.307 18:09:30 -- ftl/common.sh@163 -- # spdk_ini_pid=88196 00:23:14.307 18:09:30 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:23:14.307 18:09:30 -- ftl/common.sh@165 -- # waitforlisten 88196 /var/tmp/spdk.tgt.sock 00:23:14.307 18:09:30 -- common/autotest_common.sh@829 -- # '[' -z 88196 ']' 00:23:14.307 18:09:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:23:14.307 18:09:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:14.307 18:09:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:23:14.307 18:09:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:14.307 18:09:30 -- common/autotest_common.sh@10 -- # set +x 00:23:14.307 18:09:30 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:23:14.307 [2024-11-26 18:09:31.070248] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
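(Reading aid: the fill/checksum driver that produced the parameter echoes above can be reconstructed from the xtrace. A minimal sketch, assuming the tcp_dd helper from ftl/common.sh is already sourced; the loop body is inferred from the echoed variables and is not the verbatim script:)

bs=1048576        # 1 MiB blocks handed to spdk_dd
count=1024        # 1024 blocks = 1 GiB per pass (size=1073741824)
qd=2              # queue depth
iterations=2
seek=0 skip=0
sums=()
for ((i = 0; i < iterations; i++)); do
    echo "Fill FTL, iteration $((i + 1))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
    seek=$((seek + count))   # next pass writes the following 1 GiB (trace: seek=1024)
    # ...read-back and md5sum follow; sketched after the first checksum pass below.
done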
00:23:14.307 [2024-11-26 18:09:31.070612] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88196 ] 00:23:14.307 [2024-11-26 18:09:31.215030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.564 [2024-11-26 18:09:31.258295] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:14.564 [2024-11-26 18:09:31.258704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.128 18:09:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:15.128 18:09:31 -- common/autotest_common.sh@862 -- # return 0 00:23:15.128 18:09:31 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:23:15.692 ftln1 00:23:15.692 18:09:32 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:23:15.692 18:09:32 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:23:15.947 18:09:32 -- ftl/common.sh@173 -- # echo ']}' 00:23:15.947 18:09:32 -- ftl/common.sh@176 -- # killprocess 88196 00:23:15.947 18:09:32 -- common/autotest_common.sh@936 -- # '[' -z 88196 ']' 00:23:15.948 18:09:32 -- common/autotest_common.sh@940 -- # kill -0 88196 00:23:15.948 18:09:32 -- common/autotest_common.sh@941 -- # uname 00:23:15.948 18:09:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:15.948 18:09:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88196 00:23:15.948 18:09:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:15.948 killing process with pid 88196 00:23:15.948 18:09:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:15.948 18:09:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88196' 00:23:15.948 18:09:32 -- common/autotest_common.sh@955 -- # kill 88196 00:23:15.948 18:09:32 -- common/autotest_common.sh@960 -- # wait 88196 00:23:16.521 18:09:33 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:23:16.521 18:09:33 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:23:16.521 [2024-11-26 18:09:33.221400] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
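(The RPC sequence just traced, common.sh@167 through @173, is the whole initiator bootstrap. Pulled out of the trace it is roughly the following; the redirect target is an assumption, since the trace only shows the echoed JSON fragments and the ini.json path tested at common.sh@153:)

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
# Attach the FTL bdev exported by the main target over NVMe/TCP; it shows up as ftln1.
$RPC bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
# Snapshot the bdev subsystem config so later spdk_dd runs can load it via --json.
{
    echo '{"subsystems": ['
    $RPC save_subsystem_config -n bdev
    echo ']}'
} > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json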
00:23:16.521 [2024-11-26 18:09:33.221599] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88230 ] 00:23:16.521 [2024-11-26 18:09:33.367726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.521 [2024-11-26 18:09:33.417614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.891  [2024-11-26T18:09:35.752Z] Copying: 212/1024 [MB] (212 MBps) [2024-11-26T18:09:36.685Z] Copying: 434/1024 [MB] (222 MBps) [2024-11-26T18:09:37.622Z] Copying: 660/1024 [MB] (226 MBps) [2024-11-26T18:09:38.555Z] Copying: 863/1024 [MB] (203 MBps) [2024-11-26T18:09:38.814Z] Copying: 1024/1024 [MB] (average 215 MBps) 00:23:21.888 00:23:21.888 Calculate MD5 checksum, iteration 1 00:23:21.888 18:09:38 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:23:21.888 18:09:38 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:23:21.888 18:09:38 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:21.888 18:09:38 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:21.888 18:09:38 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:21.888 18:09:38 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:21.888 18:09:38 -- ftl/common.sh@154 -- # return 0 00:23:21.888 18:09:38 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:21.888 [2024-11-26 18:09:38.702652] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
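(The checksum pass is the fill pass in reverse. From the upgrade_shutdown.sh@44 through @48 trace lines it amounts to the sketch below, executed inside the iteration loop; the file path and flags are verbatim, while the sums[] assignment combines the md5sum and cut -f1 '-d ' steps visible in the trace:)

file=/home/vagrant/spdk_repo/spdk/test/ftl/file
# Read back the 1 GiB just written; skip advances in lockstep with seek.
tcp_dd --ib=ftln1 --of=$file --bs=1048576 --count=1024 --qd=2 --skip=$skip
skip=$((skip + 1024))                      # trace: skip=1024 after this pass
sums[i]=$(md5sum $file | cut -f1 -d' ')    # trace: e3cde7d6e9c5ee559a84116856bf65f8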
00:23:21.888 [2024-11-26 18:09:38.703026] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88294 ] 00:23:22.146 [2024-11-26 18:09:38.842406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.146 [2024-11-26 18:09:38.897939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.524  [2024-11-26T18:09:41.068Z] Copying: 624/1024 [MB] (624 MBps) [2024-11-26T18:09:41.068Z] Copying: 1024/1024 [MB] (average 615 MBps) 00:23:24.142 00:23:24.142 18:09:41 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:23:24.142 18:09:41 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:26.044 18:09:42 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:23:26.044 Fill FTL, iteration 2 00:23:26.044 18:09:42 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=e3cde7d6e9c5ee559a84116856bf65f8 00:23:26.044 18:09:42 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:23:26.045 18:09:42 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:23:26.045 18:09:42 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:23:26.045 18:09:42 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:23:26.045 18:09:42 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:26.045 18:09:42 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:26.045 18:09:42 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:26.045 18:09:42 -- ftl/common.sh@154 -- # return 0 00:23:26.045 18:09:42 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:23:26.045 [2024-11-26 18:09:42.800405] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
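(Every fill and read-back pass above funnels through the tcp_dd wrapper. The common.sh@198 and @199 trace lines pin its body down to approximately the following; this is a sketch rather than the verbatim helper, and the short-circuit at common.sh@153/@154 is why calls after the first skip the initiator setup:)

tcp_dd() {
    tcp_initiator_setup   # no-op once config/ini.json exists (common.sh@153 -> @154)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        "$@"
}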
00:23:26.045 [2024-11-26 18:09:42.800776] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88340 ] 00:23:26.045 [2024-11-26 18:09:42.950940] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.304 [2024-11-26 18:09:42.996199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.678  [2024-11-26T18:09:45.540Z] Copying: 248/1024 [MB] (248 MBps) [2024-11-26T18:09:46.474Z] Copying: 492/1024 [MB] (244 MBps) [2024-11-26T18:09:47.407Z] Copying: 735/1024 [MB] (243 MBps) [2024-11-26T18:09:47.407Z] Copying: 976/1024 [MB] (241 MBps) [2024-11-26T18:09:47.666Z] Copying: 1024/1024 [MB] (average 244 MBps) 00:23:30.740 00:23:30.740 18:09:47 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:23:30.740 18:09:47 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:23:30.740 Calculate MD5 checksum, iteration 2 00:23:30.740 18:09:47 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:30.740 18:09:47 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:30.740 18:09:47 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:30.740 18:09:47 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:30.740 18:09:47 -- ftl/common.sh@154 -- # return 0 00:23:30.740 18:09:47 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:30.997 [2024-11-26 18:09:47.758361] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
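(After this second checksum the trace below turns to FTL properties. The jq filter used there to verify the cache actually holds data, upgrade_shutdown.sh@63, is easier to read standalone. A sketch of that check, assuming the bdev_ftl_get_properties output shown further down; the exit on failure is an assumption about how the test would abort:)

used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl |
    jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
# trace: used=3 (chunks 0 and 1 CLOSED at utilization 1.0, chunk 2 OPEN at 0.001953125)
[[ $used -eq 0 ]] && exit 1   # shutdown prep is only meaningful with a non-empty cache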
00:23:30.997 [2024-11-26 18:09:47.758701] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88387 ] 00:23:30.997 [2024-11-26 18:09:47.914941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.254 [2024-11-26 18:09:47.958578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.684  [2024-11-26T18:09:50.176Z] Copying: 656/1024 [MB] (656 MBps) [2024-11-26T18:09:50.741Z] Copying: 1024/1024 [MB] (average 662 MBps) 00:23:33.815 00:23:33.815 18:09:50 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:23:33.815 18:09:50 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:35.723 18:09:52 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:23:35.723 18:09:52 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=52d16acf1b31a079fb5318f09023e9cc 00:23:35.723 18:09:52 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:23:35.723 18:09:52 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:23:35.723 18:09:52 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:23:35.990 [2024-11-26 18:09:52.657194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.990 [2024-11-26 18:09:52.657253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:23:35.990 [2024-11-26 18:09:52.657270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:23:35.990 [2024-11-26 18:09:52.657281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.990 [2024-11-26 18:09:52.657322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.990 [2024-11-26 18:09:52.657333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:23:35.990 [2024-11-26 18:09:52.657351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:23:35.990 [2024-11-26 18:09:52.657364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.990 [2024-11-26 18:09:52.657386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.990 [2024-11-26 18:09:52.657397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:23:35.990 [2024-11-26 18:09:52.657415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:23:35.990 [2024-11-26 18:09:52.657425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.990 [2024-11-26 18:09:52.657515] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.303 ms, result 0 00:23:35.990 true 00:23:35.990 18:09:52 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:23:35.990 { 00:23:35.990 "name": "ftl", 00:23:35.990 "properties": [ 00:23:35.990 { 00:23:35.990 "name": "superblock_version", 00:23:35.990 "value": 5, 00:23:35.990 "read-only": true 00:23:35.990 }, 00:23:35.990 { 00:23:35.990 "name": "base_device", 00:23:35.990 "bands": [ 00:23:35.990 { 00:23:35.990 "id": 0, 00:23:35.990 "state": "FREE", 00:23:35.990 "validity": 0.0 00:23:35.990 }, 00:23:35.990 { 00:23:35.990 "id": 1, 00:23:35.990 "state": "FREE", 00:23:35.990 "validity": 0.0 00:23:35.990 }, 00:23:35.990 { 00:23:35.990 "id": 2, 00:23:35.990 "state": "FREE", 00:23:35.990 "validity": 0.0 
00:23:35.990 }, 00:23:35.990 { 00:23:35.990 "id": 3, 00:23:35.990 "state": "FREE", 00:23:35.990 "validity": 0.0 00:23:35.990 }, 00:23:35.990 { 00:23:35.990 "id": 4, 00:23:35.990 "state": "FREE", 00:23:35.990 "validity": 0.0 00:23:35.990 }, 00:23:35.990 { 00:23:35.990 "id": 5, 00:23:35.990 "state": "FREE", 00:23:35.990 "validity": 0.0 00:23:35.990 }, 00:23:35.990 { 00:23:35.990 "id": 6, 00:23:35.990 "state": "FREE", 00:23:35.990 "validity": 0.0 00:23:35.990 }, 00:23:35.990 { 00:23:35.990 "id": 7, 00:23:35.990 "state": "FREE", 00:23:35.990 "validity": 0.0 00:23:35.990 }, 00:23:35.990 { 00:23:35.990 "id": 8, 00:23:35.990 "state": "FREE", 00:23:35.990 "validity": 0.0 00:23:35.990 }, 00:23:35.990 { 00:23:35.990 "id": 9, 00:23:35.990 "state": "FREE", 00:23:35.990 "validity": 0.0 00:23:35.990 }, 00:23:35.990 { 00:23:35.990 "id": 10, 00:23:35.990 "state": "FREE", 00:23:35.990 "validity": 0.0 00:23:35.990 }, 00:23:35.990 { 00:23:35.990 "id": 11, 00:23:35.990 "state": "FREE", 00:23:35.990 "validity": 0.0 00:23:35.990 }, 00:23:35.990 { 00:23:35.990 "id": 12, 00:23:35.990 "state": "FREE", 00:23:35.990 "validity": 0.0 00:23:35.990 }, 00:23:35.990 { 00:23:35.990 "id": 13, 00:23:35.990 "state": "FREE", 00:23:35.990 "validity": 0.0 00:23:35.990 }, 00:23:35.990 { 00:23:35.990 "id": 14, 00:23:35.990 "state": "FREE", 00:23:35.990 "validity": 0.0 00:23:35.990 }, 00:23:35.990 { 00:23:35.990 "id": 15, 00:23:35.990 "state": "FREE", 00:23:35.990 "validity": 0.0 00:23:35.990 }, 00:23:35.990 { 00:23:35.990 "id": 16, 00:23:35.990 "state": "FREE", 00:23:35.990 "validity": 0.0 00:23:35.990 }, 00:23:35.990 { 00:23:35.991 "id": 17, 00:23:35.991 "state": "FREE", 00:23:35.991 "validity": 0.0 00:23:35.991 } 00:23:35.991 ], 00:23:35.991 "read-only": true 00:23:35.991 }, 00:23:35.991 { 00:23:35.991 "name": "cache_device", 00:23:35.991 "type": "bdev", 00:23:35.991 "chunks": [ 00:23:35.991 { 00:23:35.991 "id": 0, 00:23:35.991 "state": "CLOSED", 00:23:35.991 "utilization": 1.0 00:23:35.991 }, 00:23:35.991 { 00:23:35.991 "id": 1, 00:23:35.991 "state": "CLOSED", 00:23:35.991 "utilization": 1.0 00:23:35.991 }, 00:23:35.991 { 00:23:35.991 "id": 2, 00:23:35.991 "state": "OPEN", 00:23:35.991 "utilization": 0.001953125 00:23:35.991 }, 00:23:35.991 { 00:23:35.991 "id": 3, 00:23:35.991 "state": "OPEN", 00:23:35.991 "utilization": 0.0 00:23:35.991 } 00:23:35.991 ], 00:23:35.991 "read-only": true 00:23:35.991 }, 00:23:35.991 { 00:23:35.991 "name": "verbose_mode", 00:23:35.991 "value": true, 00:23:35.991 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:23:35.991 }, 00:23:35.991 { 00:23:35.991 "name": "prep_upgrade_on_shutdown", 00:23:35.991 "value": false, 00:23:35.991 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:23:35.991 } 00:23:35.991 ] 00:23:35.991 } 00:23:35.991 18:09:52 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:23:36.250 [2024-11-26 18:09:53.060862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:36.250 [2024-11-26 18:09:53.060921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:23:36.250 [2024-11-26 18:09:53.060938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:23:36.250 [2024-11-26 18:09:53.060948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:36.250 [2024-11-26 18:09:53.060978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:23:36.250 [2024-11-26 18:09:53.060989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:23:36.250 [2024-11-26 18:09:53.061000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:23:36.250 [2024-11-26 18:09:53.061009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:36.250 [2024-11-26 18:09:53.061030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:36.250 [2024-11-26 18:09:53.061040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:23:36.250 [2024-11-26 18:09:53.061050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:23:36.250 [2024-11-26 18:09:53.061060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:36.250 [2024-11-26 18:09:53.061118] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.249 ms, result 0 00:23:36.250 true 00:23:36.250 18:09:53 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:23:36.250 18:09:53 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:23:36.250 18:09:53 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:23:36.509 18:09:53 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:23:36.509 18:09:53 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:23:36.509 18:09:53 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:23:36.768 [2024-11-26 18:09:53.456671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:36.768 [2024-11-26 18:09:53.456739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:23:36.768 [2024-11-26 18:09:53.456756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:23:36.768 [2024-11-26 18:09:53.456766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:36.768 [2024-11-26 18:09:53.456794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:36.768 [2024-11-26 18:09:53.456804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:23:36.768 [2024-11-26 18:09:53.456815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:23:36.768 [2024-11-26 18:09:53.456824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:36.768 [2024-11-26 18:09:53.456845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:36.768 [2024-11-26 18:09:53.456866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:23:36.768 [2024-11-26 18:09:53.456876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:23:36.768 [2024-11-26 18:09:53.456886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:36.768 [2024-11-26 18:09:53.456945] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.266 ms, result 0 00:23:36.768 true 00:23:36.768 18:09:53 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:23:36.768 { 00:23:36.768 "name": "ftl", 00:23:36.768 "properties": [ 00:23:36.768 { 00:23:36.768 "name": "superblock_version", 00:23:36.768 "value": 5, 00:23:36.768 "read-only": true 00:23:36.768 }, 00:23:36.768 { 00:23:36.768 
"name": "base_device", 00:23:36.768 "bands": [ 00:23:36.768 { 00:23:36.768 "id": 0, 00:23:36.768 "state": "FREE", 00:23:36.768 "validity": 0.0 00:23:36.768 }, 00:23:36.768 { 00:23:36.768 "id": 1, 00:23:36.768 "state": "FREE", 00:23:36.768 "validity": 0.0 00:23:36.768 }, 00:23:36.768 { 00:23:36.768 "id": 2, 00:23:36.768 "state": "FREE", 00:23:36.768 "validity": 0.0 00:23:36.768 }, 00:23:36.768 { 00:23:36.768 "id": 3, 00:23:36.768 "state": "FREE", 00:23:36.768 "validity": 0.0 00:23:36.768 }, 00:23:36.768 { 00:23:36.768 "id": 4, 00:23:36.768 "state": "FREE", 00:23:36.768 "validity": 0.0 00:23:36.768 }, 00:23:36.768 { 00:23:36.768 "id": 5, 00:23:36.768 "state": "FREE", 00:23:36.769 "validity": 0.0 00:23:36.769 }, 00:23:36.769 { 00:23:36.769 "id": 6, 00:23:36.769 "state": "FREE", 00:23:36.769 "validity": 0.0 00:23:36.769 }, 00:23:36.769 { 00:23:36.769 "id": 7, 00:23:36.769 "state": "FREE", 00:23:36.769 "validity": 0.0 00:23:36.769 }, 00:23:36.769 { 00:23:36.769 "id": 8, 00:23:36.769 "state": "FREE", 00:23:36.769 "validity": 0.0 00:23:36.769 }, 00:23:36.769 { 00:23:36.769 "id": 9, 00:23:36.769 "state": "FREE", 00:23:36.769 "validity": 0.0 00:23:36.769 }, 00:23:36.769 { 00:23:36.769 "id": 10, 00:23:36.769 "state": "FREE", 00:23:36.769 "validity": 0.0 00:23:36.769 }, 00:23:36.769 { 00:23:36.769 "id": 11, 00:23:36.769 "state": "FREE", 00:23:36.769 "validity": 0.0 00:23:36.769 }, 00:23:36.769 { 00:23:36.769 "id": 12, 00:23:36.769 "state": "FREE", 00:23:36.769 "validity": 0.0 00:23:36.769 }, 00:23:36.769 { 00:23:36.769 "id": 13, 00:23:36.769 "state": "FREE", 00:23:36.769 "validity": 0.0 00:23:36.769 }, 00:23:36.769 { 00:23:36.769 "id": 14, 00:23:36.769 "state": "FREE", 00:23:36.769 "validity": 0.0 00:23:36.769 }, 00:23:36.769 { 00:23:36.769 "id": 15, 00:23:36.769 "state": "FREE", 00:23:36.769 "validity": 0.0 00:23:36.769 }, 00:23:36.769 { 00:23:36.769 "id": 16, 00:23:36.769 "state": "FREE", 00:23:36.769 "validity": 0.0 00:23:36.769 }, 00:23:36.769 { 00:23:36.769 "id": 17, 00:23:36.769 "state": "FREE", 00:23:36.769 "validity": 0.0 00:23:36.769 } 00:23:36.769 ], 00:23:36.769 "read-only": true 00:23:36.769 }, 00:23:36.769 { 00:23:36.769 "name": "cache_device", 00:23:36.769 "type": "bdev", 00:23:36.769 "chunks": [ 00:23:36.769 { 00:23:36.769 "id": 0, 00:23:36.769 "state": "CLOSED", 00:23:36.769 "utilization": 1.0 00:23:36.769 }, 00:23:36.769 { 00:23:36.769 "id": 1, 00:23:36.769 "state": "CLOSED", 00:23:36.769 "utilization": 1.0 00:23:36.769 }, 00:23:36.769 { 00:23:36.769 "id": 2, 00:23:36.769 "state": "OPEN", 00:23:36.769 "utilization": 0.001953125 00:23:36.769 }, 00:23:36.769 { 00:23:36.769 "id": 3, 00:23:36.769 "state": "OPEN", 00:23:36.769 "utilization": 0.0 00:23:36.769 } 00:23:36.769 ], 00:23:36.769 "read-only": true 00:23:36.769 }, 00:23:36.769 { 00:23:36.769 "name": "verbose_mode", 00:23:36.769 "value": true, 00:23:36.769 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:23:36.769 }, 00:23:36.769 { 00:23:36.769 "name": "prep_upgrade_on_shutdown", 00:23:36.769 "value": true, 00:23:36.769 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:23:36.769 } 00:23:36.769 ] 00:23:36.769 } 00:23:36.769 18:09:53 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:23:36.769 18:09:53 -- ftl/common.sh@130 -- # [[ -n 88075 ]] 00:23:36.769 18:09:53 -- ftl/common.sh@131 -- # killprocess 88075 00:23:36.769 18:09:53 -- common/autotest_common.sh@936 -- # '[' -z 88075 ']' 00:23:36.769 18:09:53 -- 
common/autotest_common.sh@940 -- # kill -0 88075 00:23:36.769 18:09:53 -- common/autotest_common.sh@941 -- # uname 00:23:36.769 18:09:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:36.769 18:09:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88075 00:23:37.027 killing process with pid 88075 00:23:37.027 18:09:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:37.027 18:09:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:37.027 18:09:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88075' 00:23:37.027 18:09:53 -- common/autotest_common.sh@955 -- # kill 88075 00:23:37.027 18:09:53 -- common/autotest_common.sh@960 -- # wait 88075 00:23:37.027 [2024-11-26 18:09:53.842728] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:23:37.027 [2024-11-26 18:09:53.846898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:37.027 [2024-11-26 18:09:53.846944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:23:37.027 [2024-11-26 18:09:53.846959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:37.027 [2024-11-26 18:09:53.846975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:37.027 [2024-11-26 18:09:53.847001] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:23:37.027 [2024-11-26 18:09:53.847661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:37.027 [2024-11-26 18:09:53.847685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:23:37.027 [2024-11-26 18:09:53.847696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.646 ms 00:23:37.027 [2024-11-26 18:09:53.847706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.139 [2024-11-26 18:10:01.156874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:45.139 [2024-11-26 18:10:01.156947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:23:45.139 [2024-11-26 18:10:01.156965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7321.005 ms 00:23:45.139 [2024-11-26 18:10:01.156976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.139 [2024-11-26 18:10:01.158019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:45.139 [2024-11-26 18:10:01.158049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:23:45.139 [2024-11-26 18:10:01.158061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.016 ms 00:23:45.139 [2024-11-26 18:10:01.158078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.139 [2024-11-26 18:10:01.159014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:45.139 [2024-11-26 18:10:01.159039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:23:45.139 [2024-11-26 18:10:01.159051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.906 ms 00:23:45.139 [2024-11-26 18:10:01.159061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.139 [2024-11-26 18:10:01.160770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:45.139 [2024-11-26 18:10:01.160808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:23:45.139 [2024-11-26 18:10:01.160820] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.663 ms 00:23:45.139 [2024-11-26 18:10:01.160830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.139 [2024-11-26 18:10:01.163321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:45.139 [2024-11-26 18:10:01.163372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:23:45.139 [2024-11-26 18:10:01.163385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.464 ms 00:23:45.139 [2024-11-26 18:10:01.163395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.139 [2024-11-26 18:10:01.163475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:45.139 [2024-11-26 18:10:01.163488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:23:45.139 [2024-11-26 18:10:01.163499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:23:45.139 [2024-11-26 18:10:01.163509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.139 [2024-11-26 18:10:01.164698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:45.139 [2024-11-26 18:10:01.164732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:23:45.139 [2024-11-26 18:10:01.164744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.173 ms 00:23:45.139 [2024-11-26 18:10:01.164754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.139 [2024-11-26 18:10:01.165905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:45.139 [2024-11-26 18:10:01.165943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:23:45.139 [2024-11-26 18:10:01.165954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.122 ms 00:23:45.139 [2024-11-26 18:10:01.165964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.139 [2024-11-26 18:10:01.167264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:45.139 [2024-11-26 18:10:01.167302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:23:45.139 [2024-11-26 18:10:01.167314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.272 ms 00:23:45.139 [2024-11-26 18:10:01.167324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.139 [2024-11-26 18:10:01.168318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:45.139 [2024-11-26 18:10:01.168354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:23:45.139 [2024-11-26 18:10:01.168366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.936 ms 00:23:45.139 [2024-11-26 18:10:01.168376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.139 [2024-11-26 18:10:01.168402] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:23:45.139 [2024-11-26 18:10:01.168418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:23:45.139 [2024-11-26 18:10:01.168432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:23:45.139 [2024-11-26 18:10:01.168443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:23:45.139 [2024-11-26 18:10:01.168470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 
wr_cnt: 0 state: free 00:23:45.139 [2024-11-26 18:10:01.168482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:45.139 [2024-11-26 18:10:01.168509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:45.139 [2024-11-26 18:10:01.168520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:45.139 [2024-11-26 18:10:01.168544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:45.139 [2024-11-26 18:10:01.168554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:45.139 [2024-11-26 18:10:01.168564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:45.139 [2024-11-26 18:10:01.168574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:45.139 [2024-11-26 18:10:01.168585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:45.139 [2024-11-26 18:10:01.168595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:45.139 [2024-11-26 18:10:01.168606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:45.139 [2024-11-26 18:10:01.168616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:45.139 [2024-11-26 18:10:01.168627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:45.139 [2024-11-26 18:10:01.168637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:45.139 [2024-11-26 18:10:01.168648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:45.140 [2024-11-26 18:10:01.168660] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:23:45.140 [2024-11-26 18:10:01.168676] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 6cc17920-e1e5-4391-b94b-a54981d27a0e 00:23:45.140 [2024-11-26 18:10:01.168687] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:23:45.140 [2024-11-26 18:10:01.168697] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:23:45.140 [2024-11-26 18:10:01.168706] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:23:45.140 [2024-11-26 18:10:01.168716] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:23:45.140 [2024-11-26 18:10:01.168726] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:23:45.140 [2024-11-26 18:10:01.168736] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:23:45.140 [2024-11-26 18:10:01.168745] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:23:45.140 [2024-11-26 18:10:01.168754] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:23:45.140 [2024-11-26 18:10:01.168763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:23:45.140 [2024-11-26 18:10:01.168772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:45.140 [2024-11-26 18:10:01.168782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:23:45.140 [2024-11-26 18:10:01.168792] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.371 ms 00:23:45.140 [2024-11-26 18:10:01.168803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.140 [2024-11-26 18:10:01.170630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:45.140 [2024-11-26 18:10:01.170656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:23:45.140 [2024-11-26 18:10:01.170668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.809 ms 00:23:45.140 [2024-11-26 18:10:01.170678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.140 [2024-11-26 18:10:01.170746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:45.140 [2024-11-26 18:10:01.170757] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:23:45.140 [2024-11-26 18:10:01.170774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:23:45.140 [2024-11-26 18:10:01.170783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.140 [2024-11-26 18:10:01.177846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:45.140 [2024-11-26 18:10:01.177881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:45.140 [2024-11-26 18:10:01.177904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:45.140 [2024-11-26 18:10:01.177914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.140 [2024-11-26 18:10:01.177946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:45.140 [2024-11-26 18:10:01.177957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:45.140 [2024-11-26 18:10:01.177971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:45.140 [2024-11-26 18:10:01.177981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.140 [2024-11-26 18:10:01.178054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:45.140 [2024-11-26 18:10:01.178067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:45.140 [2024-11-26 18:10:01.178078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:45.140 [2024-11-26 18:10:01.178088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.140 [2024-11-26 18:10:01.178106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:45.140 [2024-11-26 18:10:01.178117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:45.140 [2024-11-26 18:10:01.178127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:45.140 [2024-11-26 18:10:01.178136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.140 [2024-11-26 18:10:01.192507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:45.140 [2024-11-26 18:10:01.192559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:45.140 [2024-11-26 18:10:01.192573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:45.140 [2024-11-26 18:10:01.192584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.140 [2024-11-26 18:10:01.197352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:45.140 [2024-11-26 18:10:01.197389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:45.140 
[2024-11-26 18:10:01.197402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:45.140 [2024-11-26 18:10:01.197432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.140 [2024-11-26 18:10:01.197527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:45.140 [2024-11-26 18:10:01.197540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:45.140 [2024-11-26 18:10:01.197551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:45.140 [2024-11-26 18:10:01.197561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.140 [2024-11-26 18:10:01.197594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:45.140 [2024-11-26 18:10:01.197605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:45.140 [2024-11-26 18:10:01.197615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:45.140 [2024-11-26 18:10:01.197625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.140 [2024-11-26 18:10:01.197707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:45.140 [2024-11-26 18:10:01.197719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:45.140 [2024-11-26 18:10:01.197730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:45.140 [2024-11-26 18:10:01.197740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.140 [2024-11-26 18:10:01.197774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:45.140 [2024-11-26 18:10:01.197786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:23:45.140 [2024-11-26 18:10:01.197797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:45.140 [2024-11-26 18:10:01.197806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.140 [2024-11-26 18:10:01.197850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:45.140 [2024-11-26 18:10:01.197860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:45.140 [2024-11-26 18:10:01.197871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:45.140 [2024-11-26 18:10:01.197880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.140 [2024-11-26 18:10:01.197925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:45.140 [2024-11-26 18:10:01.197936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:45.140 [2024-11-26 18:10:01.197964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:45.140 [2024-11-26 18:10:01.197974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:45.140 [2024-11-26 18:10:01.198108] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7363.103 ms, result 0 00:23:46.516 18:10:03 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:23:46.516 18:10:03 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:23:46.516 18:10:03 -- ftl/common.sh@81 -- # local base_bdev= 00:23:46.516 18:10:03 -- ftl/common.sh@82 -- # local cache_bdev= 00:23:46.516 18:10:03 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:46.516 18:10:03 -- ftl/common.sh@89 -- # spdk_tgt_pid=88569 00:23:46.516 18:10:03 -- 
ftl/common.sh@90 -- # export spdk_tgt_pid 00:23:46.516 18:10:03 -- ftl/common.sh@91 -- # waitforlisten 88569 00:23:46.516 18:10:03 -- common/autotest_common.sh@829 -- # '[' -z 88569 ']' 00:23:46.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.516 18:10:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.516 18:10:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:46.516 18:10:03 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:46.516 18:10:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.516 18:10:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:46.516 18:10:03 -- common/autotest_common.sh@10 -- # set +x 00:23:46.774 [2024-11-26 18:10:03.535070] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:46.774 [2024-11-26 18:10:03.535658] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88569 ] 00:23:46.774 [2024-11-26 18:10:03.684824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.032 [2024-11-26 18:10:03.729173] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:47.032 [2024-11-26 18:10:03.729367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.293 [2024-11-26 18:10:04.012919] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:23:47.293 [2024-11-26 18:10:04.012983] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:23:47.293 [2024-11-26 18:10:04.150032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.293 [2024-11-26 18:10:04.150080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:23:47.293 [2024-11-26 18:10:04.150095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:23:47.293 [2024-11-26 18:10:04.150106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.293 [2024-11-26 18:10:04.150169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.293 [2024-11-26 18:10:04.150183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:47.293 [2024-11-26 18:10:04.150194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:23:47.293 [2024-11-26 18:10:04.150217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.293 [2024-11-26 18:10:04.150242] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:23:47.293 [2024-11-26 18:10:04.150567] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:23:47.293 [2024-11-26 18:10:04.150601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.293 [2024-11-26 18:10:04.150611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:47.293 [2024-11-26 18:10:04.150622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.361 ms 00:23:47.293 [2024-11-26 18:10:04.150631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:23:47.293 [2024-11-26 18:10:04.152082] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:23:47.293 [2024-11-26 18:10:04.154558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.293 [2024-11-26 18:10:04.154592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:23:47.293 [2024-11-26 18:10:04.154610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.481 ms 00:23:47.293 [2024-11-26 18:10:04.154620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.293 [2024-11-26 18:10:04.154742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.293 [2024-11-26 18:10:04.154758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:23:47.293 [2024-11-26 18:10:04.154769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.083 ms 00:23:47.293 [2024-11-26 18:10:04.154778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.293 [2024-11-26 18:10:04.161657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.293 [2024-11-26 18:10:04.161688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:47.293 [2024-11-26 18:10:04.161701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.817 ms 00:23:47.293 [2024-11-26 18:10:04.161711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.293 [2024-11-26 18:10:04.161767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.293 [2024-11-26 18:10:04.161785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:47.293 [2024-11-26 18:10:04.161795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:23:47.293 [2024-11-26 18:10:04.161814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.293 [2024-11-26 18:10:04.161887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.293 [2024-11-26 18:10:04.161900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:23:47.293 [2024-11-26 18:10:04.161914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:23:47.293 [2024-11-26 18:10:04.161924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.293 [2024-11-26 18:10:04.161956] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:23:47.293 [2024-11-26 18:10:04.163607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.293 [2024-11-26 18:10:04.163628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:47.293 [2024-11-26 18:10:04.163639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.664 ms 00:23:47.293 [2024-11-26 18:10:04.163649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.293 [2024-11-26 18:10:04.163680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.293 [2024-11-26 18:10:04.163691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:23:47.293 [2024-11-26 18:10:04.163707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:47.293 [2024-11-26 18:10:04.163717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.293 [2024-11-26 18:10:04.163745] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 
00:23:47.293 [2024-11-26 18:10:04.163769] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:23:47.293 [2024-11-26 18:10:04.163801] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:23:47.293 [2024-11-26 18:10:04.163818] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:23:47.293 [2024-11-26 18:10:04.163883] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:23:47.293 [2024-11-26 18:10:04.163899] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:23:47.293 [2024-11-26 18:10:04.163911] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:23:47.293 [2024-11-26 18:10:04.163924] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:23:47.293 [2024-11-26 18:10:04.163947] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:23:47.293 [2024-11-26 18:10:04.163958] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:23:47.293 [2024-11-26 18:10:04.163968] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:23:47.293 [2024-11-26 18:10:04.163977] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:23:47.293 [2024-11-26 18:10:04.163987] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:23:47.293 [2024-11-26 18:10:04.163997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.293 [2024-11-26 18:10:04.164007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:23:47.293 [2024-11-26 18:10:04.164020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.258 ms 00:23:47.293 [2024-11-26 18:10:04.164032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.293 [2024-11-26 18:10:04.164089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.293 [2024-11-26 18:10:04.164100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:23:47.293 [2024-11-26 18:10:04.164109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:23:47.293 [2024-11-26 18:10:04.164119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.293 [2024-11-26 18:10:04.164193] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:23:47.293 [2024-11-26 18:10:04.164205] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:23:47.293 [2024-11-26 18:10:04.164223] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:47.293 [2024-11-26 18:10:04.164240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:47.293 [2024-11-26 18:10:04.164260] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:23:47.293 [2024-11-26 18:10:04.164269] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:23:47.293 [2024-11-26 18:10:04.164278] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:23:47.293 [2024-11-26 18:10:04.164287] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:23:47.293 [2024-11-26 18:10:04.164296] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 
MiB 00:23:47.293 [2024-11-26 18:10:04.164306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:47.293 [2024-11-26 18:10:04.164318] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:23:47.293 [2024-11-26 18:10:04.164327] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:23:47.293 [2024-11-26 18:10:04.164335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:47.293 [2024-11-26 18:10:04.164344] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:23:47.293 [2024-11-26 18:10:04.164353] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:23:47.293 [2024-11-26 18:10:04.164361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:47.293 [2024-11-26 18:10:04.164370] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:23:47.294 [2024-11-26 18:10:04.164378] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:23:47.294 [2024-11-26 18:10:04.164387] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:47.294 [2024-11-26 18:10:04.164396] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:23:47.294 [2024-11-26 18:10:04.164404] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:23:47.294 [2024-11-26 18:10:04.164413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:23:47.294 [2024-11-26 18:10:04.164422] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:23:47.294 [2024-11-26 18:10:04.164430] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:23:47.294 [2024-11-26 18:10:04.164439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:23:47.294 [2024-11-26 18:10:04.164448] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:23:47.294 [2024-11-26 18:10:04.164487] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:23:47.294 [2024-11-26 18:10:04.164497] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:23:47.294 [2024-11-26 18:10:04.164505] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:23:47.294 [2024-11-26 18:10:04.164514] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:23:47.294 [2024-11-26 18:10:04.164523] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:23:47.294 [2024-11-26 18:10:04.164532] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:23:47.294 [2024-11-26 18:10:04.164541] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:23:47.294 [2024-11-26 18:10:04.164550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:23:47.294 [2024-11-26 18:10:04.164559] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:23:47.294 [2024-11-26 18:10:04.164568] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:23:47.294 [2024-11-26 18:10:04.164579] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:47.294 [2024-11-26 18:10:04.164588] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:23:47.294 [2024-11-26 18:10:04.164596] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:23:47.294 [2024-11-26 18:10:04.164605] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:47.294 [2024-11-26 18:10:04.164614] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:23:47.294 [2024-11-26 18:10:04.164631] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:23:47.294 [2024-11-26 18:10:04.164643] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:47.294 [2024-11-26 18:10:04.164653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:47.294 [2024-11-26 18:10:04.164663] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:23:47.294 [2024-11-26 18:10:04.164672] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:23:47.294 [2024-11-26 18:10:04.164681] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:23:47.294 [2024-11-26 18:10:04.164690] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:23:47.294 [2024-11-26 18:10:04.164698] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:23:47.294 [2024-11-26 18:10:04.164707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:23:47.294 [2024-11-26 18:10:04.164718] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:23:47.294 [2024-11-26 18:10:04.164729] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:47.294 [2024-11-26 18:10:04.164741] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:23:47.294 [2024-11-26 18:10:04.164751] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:23:47.294 [2024-11-26 18:10:04.164761] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:23:47.294 [2024-11-26 18:10:04.164771] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:23:47.294 [2024-11-26 18:10:04.164780] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:23:47.294 [2024-11-26 18:10:04.164790] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:23:47.294 [2024-11-26 18:10:04.164803] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:23:47.294 [2024-11-26 18:10:04.164813] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:23:47.294 [2024-11-26 18:10:04.164823] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:23:47.294 [2024-11-26 18:10:04.164833] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:23:47.294 [2024-11-26 18:10:04.164842] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:23:47.294 [2024-11-26 18:10:04.164852] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:23:47.294 [2024-11-26 18:10:04.164862] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 
blk_offs:0x101f60 blk_sz:0x3e0a0 00:23:47.294 [2024-11-26 18:10:04.164872] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:23:47.294 [2024-11-26 18:10:04.164883] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:47.294 [2024-11-26 18:10:04.164893] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:47.294 [2024-11-26 18:10:04.164905] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:23:47.294 [2024-11-26 18:10:04.164916] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:23:47.294 [2024-11-26 18:10:04.164925] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:23:47.294 [2024-11-26 18:10:04.164936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.294 [2024-11-26 18:10:04.164945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:23:47.294 [2024-11-26 18:10:04.164965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.778 ms 00:23:47.294 [2024-11-26 18:10:04.164978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.294 [2024-11-26 18:10:04.173135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.294 [2024-11-26 18:10:04.173168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:47.294 [2024-11-26 18:10:04.173181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.117 ms 00:23:47.294 [2024-11-26 18:10:04.173191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.294 [2024-11-26 18:10:04.173235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.294 [2024-11-26 18:10:04.173254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:23:47.294 [2024-11-26 18:10:04.173271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:23:47.294 [2024-11-26 18:10:04.173283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.294 [2024-11-26 18:10:04.185285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.294 [2024-11-26 18:10:04.185330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:47.294 [2024-11-26 18:10:04.185344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.957 ms 00:23:47.294 [2024-11-26 18:10:04.185355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.294 [2024-11-26 18:10:04.185405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.294 [2024-11-26 18:10:04.185425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:47.294 [2024-11-26 18:10:04.185440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:47.294 [2024-11-26 18:10:04.185479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.294 [2024-11-26 18:10:04.185958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.294 [2024-11-26 18:10:04.185972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:47.294 [2024-11-26 
18:10:04.185983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.419 ms 00:23:47.294 [2024-11-26 18:10:04.186004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.294 [2024-11-26 18:10:04.186052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.294 [2024-11-26 18:10:04.186064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:47.294 [2024-11-26 18:10:04.186081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:23:47.294 [2024-11-26 18:10:04.186095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.294 [2024-11-26 18:10:04.193527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.294 [2024-11-26 18:10:04.193569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:47.294 [2024-11-26 18:10:04.193582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.419 ms 00:23:47.294 [2024-11-26 18:10:04.193592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.294 [2024-11-26 18:10:04.196285] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:47.294 [2024-11-26 18:10:04.196321] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:23:47.294 [2024-11-26 18:10:04.196339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.294 [2024-11-26 18:10:04.196349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:23:47.294 [2024-11-26 18:10:04.196360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.636 ms 00:23:47.294 [2024-11-26 18:10:04.196370] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.294 [2024-11-26 18:10:04.200181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.294 [2024-11-26 18:10:04.200218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:23:47.294 [2024-11-26 18:10:04.200232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.750 ms 00:23:47.294 [2024-11-26 18:10:04.200243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.294 [2024-11-26 18:10:04.202097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.294 [2024-11-26 18:10:04.202133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:23:47.294 [2024-11-26 18:10:04.202144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.822 ms 00:23:47.294 [2024-11-26 18:10:04.202154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.295 [2024-11-26 18:10:04.203944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.295 [2024-11-26 18:10:04.203977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:23:47.295 [2024-11-26 18:10:04.203989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.746 ms 00:23:47.295 [2024-11-26 18:10:04.203998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.295 [2024-11-26 18:10:04.204203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.295 [2024-11-26 18:10:04.204221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:23:47.295 [2024-11-26 18:10:04.204232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.135 ms 
00:23:47.295 [2024-11-26 18:10:04.204242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.554 [2024-11-26 18:10:04.229692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.554 [2024-11-26 18:10:04.229749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:23:47.554 [2024-11-26 18:10:04.229765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.464 ms 00:23:47.554 [2024-11-26 18:10:04.229776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.554 [2024-11-26 18:10:04.236719] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:23:47.554 [2024-11-26 18:10:04.237893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.554 [2024-11-26 18:10:04.237918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:23:47.554 [2024-11-26 18:10:04.237931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.052 ms 00:23:47.554 [2024-11-26 18:10:04.237942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.554 [2024-11-26 18:10:04.238047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.554 [2024-11-26 18:10:04.238072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:23:47.554 [2024-11-26 18:10:04.238083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:23:47.554 [2024-11-26 18:10:04.238093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.554 [2024-11-26 18:10:04.238152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.554 [2024-11-26 18:10:04.238170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:23:47.554 [2024-11-26 18:10:04.238182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:23:47.554 [2024-11-26 18:10:04.238191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.554 [2024-11-26 18:10:04.240548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.554 [2024-11-26 18:10:04.240580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:23:47.554 [2024-11-26 18:10:04.240597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.327 ms 00:23:47.554 [2024-11-26 18:10:04.240608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.554 [2024-11-26 18:10:04.240651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.554 [2024-11-26 18:10:04.240662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:23:47.554 [2024-11-26 18:10:04.240673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:23:47.554 [2024-11-26 18:10:04.240686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.554 [2024-11-26 18:10:04.240749] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:23:47.554 [2024-11-26 18:10:04.240762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.554 [2024-11-26 18:10:04.240772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:23:47.554 [2024-11-26 18:10:04.240782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:23:47.554 [2024-11-26 18:10:04.240791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.554 [2024-11-26 18:10:04.244672] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.554 [2024-11-26 18:10:04.244714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:23:47.554 [2024-11-26 18:10:04.244727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.854 ms 00:23:47.554 [2024-11-26 18:10:04.244742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.554 [2024-11-26 18:10:04.244828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.554 [2024-11-26 18:10:04.244840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:23:47.554 [2024-11-26 18:10:04.244850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:23:47.555 [2024-11-26 18:10:04.244868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.555 [2024-11-26 18:10:04.245928] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 95.619 ms, result 0 00:23:47.555 [2024-11-26 18:10:04.258642] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.555 [2024-11-26 18:10:04.274639] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:23:47.555 [2024-11-26 18:10:04.282790] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:47.555 18:10:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:47.555 18:10:04 -- common/autotest_common.sh@862 -- # return 0 00:23:47.555 18:10:04 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:47.555 18:10:04 -- ftl/common.sh@95 -- # return 0 00:23:47.555 18:10:04 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:23:47.814 [2024-11-26 18:10:04.531679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.814 [2024-11-26 18:10:04.531727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:23:47.814 [2024-11-26 18:10:04.531747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:23:47.814 [2024-11-26 18:10:04.531758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.814 [2024-11-26 18:10:04.531785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.814 [2024-11-26 18:10:04.531804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:23:47.814 [2024-11-26 18:10:04.531814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:23:47.814 [2024-11-26 18:10:04.531824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.814 [2024-11-26 18:10:04.531851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.814 [2024-11-26 18:10:04.531862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:23:47.814 [2024-11-26 18:10:04.531878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:23:47.814 [2024-11-26 18:10:04.531888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.814 [2024-11-26 18:10:04.531963] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.265 ms, result 0 00:23:47.814 true 00:23:47.814 18:10:04 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b 
ftl 00:23:48.072 { 00:23:48.072 "name": "ftl", 00:23:48.072 "properties": [ 00:23:48.072 { 00:23:48.072 "name": "superblock_version", 00:23:48.072 "value": 5, 00:23:48.072 "read-only": true 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "name": "base_device", 00:23:48.072 "bands": [ 00:23:48.072 { 00:23:48.072 "id": 0, 00:23:48.072 "state": "CLOSED", 00:23:48.072 "validity": 1.0 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "id": 1, 00:23:48.072 "state": "CLOSED", 00:23:48.072 "validity": 1.0 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "id": 2, 00:23:48.072 "state": "CLOSED", 00:23:48.072 "validity": 0.007843137254901933 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "id": 3, 00:23:48.072 "state": "FREE", 00:23:48.072 "validity": 0.0 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "id": 4, 00:23:48.072 "state": "FREE", 00:23:48.072 "validity": 0.0 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "id": 5, 00:23:48.072 "state": "FREE", 00:23:48.072 "validity": 0.0 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "id": 6, 00:23:48.072 "state": "FREE", 00:23:48.072 "validity": 0.0 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "id": 7, 00:23:48.072 "state": "FREE", 00:23:48.072 "validity": 0.0 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "id": 8, 00:23:48.072 "state": "FREE", 00:23:48.072 "validity": 0.0 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "id": 9, 00:23:48.072 "state": "FREE", 00:23:48.072 "validity": 0.0 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "id": 10, 00:23:48.072 "state": "FREE", 00:23:48.072 "validity": 0.0 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "id": 11, 00:23:48.072 "state": "FREE", 00:23:48.072 "validity": 0.0 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "id": 12, 00:23:48.072 "state": "FREE", 00:23:48.072 "validity": 0.0 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "id": 13, 00:23:48.072 "state": "FREE", 00:23:48.072 "validity": 0.0 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "id": 14, 00:23:48.072 "state": "FREE", 00:23:48.072 "validity": 0.0 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "id": 15, 00:23:48.072 "state": "FREE", 00:23:48.072 "validity": 0.0 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "id": 16, 00:23:48.072 "state": "FREE", 00:23:48.072 "validity": 0.0 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "id": 17, 00:23:48.072 "state": "FREE", 00:23:48.072 "validity": 0.0 00:23:48.072 } 00:23:48.072 ], 00:23:48.072 "read-only": true 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "name": "cache_device", 00:23:48.072 "type": "bdev", 00:23:48.072 "chunks": [ 00:23:48.072 { 00:23:48.072 "id": 0, 00:23:48.072 "state": "OPEN", 00:23:48.072 "utilization": 0.0 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "id": 1, 00:23:48.072 "state": "OPEN", 00:23:48.072 "utilization": 0.0 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "id": 2, 00:23:48.072 "state": "FREE", 00:23:48.072 "utilization": 0.0 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "id": 3, 00:23:48.072 "state": "FREE", 00:23:48.072 "utilization": 0.0 00:23:48.072 } 00:23:48.072 ], 00:23:48.072 "read-only": true 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "name": "verbose_mode", 00:23:48.072 "value": true, 00:23:48.072 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:23:48.072 }, 00:23:48.072 { 00:23:48.072 "name": "prep_upgrade_on_shutdown", 00:23:48.072 "value": false, 00:23:48.072 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:23:48.072 } 00:23:48.072 ] 00:23:48.072 } 00:23:48.072 18:10:04 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 
00:23:48.072 18:10:04 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:23:48.072 18:10:04 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:23:48.330 18:10:05 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:23:48.330 18:10:05 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:23:48.330 18:10:05 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:23:48.330 18:10:05 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:23:48.330 18:10:05 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:23:48.330 18:10:05 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:23:48.330 18:10:05 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:23:48.330 18:10:05 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:23:48.330 18:10:05 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:23:48.330 18:10:05 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:23:48.330 18:10:05 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:48.330 Validate MD5 checksum, iteration 1 00:23:48.330 18:10:05 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:23:48.330 18:10:05 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:48.330 18:10:05 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:48.330 18:10:05 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:48.330 18:10:05 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:48.330 18:10:05 -- ftl/common.sh@154 -- # return 0 00:23:48.330 18:10:05 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:48.588 [2024-11-26 18:10:05.269857] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:23:48.588 [2024-11-26 18:10:05.269977] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88595 ] 00:23:48.588 [2024-11-26 18:10:05.431802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.588 [2024-11-26 18:10:05.477898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.041  [2024-11-26T18:10:07.531Z] Copying: 677/1024 [MB] (677 MBps) [2024-11-26T18:10:08.099Z] Copying: 1024/1024 [MB] (average 667 MBps) 00:23:51.173 00:23:51.431 18:10:08 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:23:51.431 18:10:08 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:53.338 18:10:09 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:23:53.338 Validate MD5 checksum, iteration 2 00:23:53.338 18:10:09 -- ftl/upgrade_shutdown.sh@103 -- # sum=e3cde7d6e9c5ee559a84116856bf65f8 00:23:53.338 18:10:09 -- ftl/upgrade_shutdown.sh@105 -- # [[ e3cde7d6e9c5ee559a84116856bf65f8 != \e\3\c\d\e\7\d\6\e\9\c\5\e\e\5\5\9\a\8\4\1\1\6\8\5\6\b\f\6\5\f\8 ]] 00:23:53.338 18:10:09 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:23:53.338 18:10:09 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:53.338 18:10:09 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:23:53.338 18:10:09 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:53.338 18:10:09 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:53.338 18:10:09 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:53.338 18:10:09 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:53.338 18:10:09 -- ftl/common.sh@154 -- # return 0 00:23:53.338 18:10:09 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:53.338 [2024-11-26 18:10:09.962240] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
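Note: the backslash-riddled comparison above is only xtrace rendering. The script runs [[ $sum != $expected ]] with an unquoted right-hand side, so bash -x prints each character escaped to show it is matched literally rather than as a glob pattern. Each iteration reads the next 1 GiB window of the FTL device (--skip advances by 1024 one-MiB blocks) and compares its MD5 against the sum recorded before the shutdown. test_validate_checksum reduces to roughly this; the md5 array is an assumption standing in for the values captured by an earlier write pass outside this excerpt:

skip=0
for ((i = 0; i < iterations; i++)); do
  echo "Validate MD5 checksum, iteration $((i + 1))"
  tcp_dd --ib=ftln1 --of="$tmpfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
  skip=$((skip + 1024))
  sum=$(md5sum "$tmpfile" | cut -f1 -d' ')
  [[ $sum != ${md5[$i]} ]] && return 1   # any mismatch fails the test
done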
00:23:53.338 [2024-11-26 18:10:09.962650] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88652 ] 00:23:53.338 [2024-11-26 18:10:10.118174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.338 [2024-11-26 18:10:10.164982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.711  [2024-11-26T18:10:12.202Z] Copying: 681/1024 [MB] (681 MBps) [2024-11-26T18:10:12.769Z] Copying: 1024/1024 [MB] (average 676 MBps) 00:23:55.843 00:23:55.843 18:10:12 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:23:55.843 18:10:12 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:57.741 18:10:14 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:23:57.741 18:10:14 -- ftl/upgrade_shutdown.sh@103 -- # sum=52d16acf1b31a079fb5318f09023e9cc 00:23:57.741 18:10:14 -- ftl/upgrade_shutdown.sh@105 -- # [[ 52d16acf1b31a079fb5318f09023e9cc != \5\2\d\1\6\a\c\f\1\b\3\1\a\0\7\9\f\b\5\3\1\8\f\0\9\0\2\3\e\9\c\c ]] 00:23:57.741 18:10:14 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:23:57.741 18:10:14 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:57.741 18:10:14 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:23:57.741 18:10:14 -- ftl/common.sh@137 -- # [[ -n 88569 ]] 00:23:57.741 18:10:14 -- ftl/common.sh@138 -- # kill -9 88569 00:23:57.741 18:10:14 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:23:57.741 18:10:14 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:23:57.741 18:10:14 -- ftl/common.sh@81 -- # local base_bdev= 00:23:57.741 18:10:14 -- ftl/common.sh@82 -- # local cache_bdev= 00:23:57.741 18:10:14 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:57.741 18:10:14 -- ftl/common.sh@89 -- # spdk_tgt_pid=88708 00:23:57.741 18:10:14 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:57.742 18:10:14 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:23:57.742 18:10:14 -- ftl/common.sh@91 -- # waitforlisten 88708 00:23:57.742 18:10:14 -- common/autotest_common.sh@829 -- # '[' -z 88708 ']' 00:23:57.742 18:10:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.742 18:10:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.742 18:10:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.742 18:10:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.742 18:10:14 -- common/autotest_common.sh@10 -- # set +x 00:23:57.742 [2024-11-26 18:10:14.575849] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
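Note: this is the dirty-shutdown step the test exists for. The running target (pid 88569) gets SIGKILL, so FTL has no chance to persist anything, and a replacement spdk_tgt (pid 88708) is immediately started from the tgt.json saved earlier. The startup log that follows shows FTL recovering from shared memory and on-disk metadata rather than from a clean superblock. Simplified, with variable names approximating those in ftl/common.sh, the restart is:

kill -9 "$spdk_tgt_pid"        # no clean shutdown path runs; FTL state stays dirty
unset spdk_tgt_pid
"$rootdir/build/bin/spdk_tgt" --cpumask='[0]' --config="$tgt_json" &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"  # poll until the new target answers on /var/tmp/spdk.sock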
00:23:57.742 [2024-11-26 18:10:14.575999] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88708 ] 00:23:58.012 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 88569 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:23:58.012 [2024-11-26 18:10:14.729027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.012 [2024-11-26 18:10:14.778433] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:58.012 [2024-11-26 18:10:14.778688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.375 [2024-11-26 18:10:15.068953] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:23:58.375 [2024-11-26 18:10:15.069030] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:23:58.375 [2024-11-26 18:10:15.206960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.375 [2024-11-26 18:10:15.207024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:23:58.375 [2024-11-26 18:10:15.207041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:23:58.375 [2024-11-26 18:10:15.207053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.375 [2024-11-26 18:10:15.207132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.375 [2024-11-26 18:10:15.207146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:58.375 [2024-11-26 18:10:15.207158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:23:58.375 [2024-11-26 18:10:15.207172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.375 [2024-11-26 18:10:15.207206] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:23:58.375 [2024-11-26 18:10:15.207509] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:23:58.375 [2024-11-26 18:10:15.207537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.375 [2024-11-26 18:10:15.207549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:58.375 [2024-11-26 18:10:15.207560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.337 ms 00:23:58.375 [2024-11-26 18:10:15.207571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.375 [2024-11-26 18:10:15.208005] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:23:58.375 [2024-11-26 18:10:15.212131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.375 [2024-11-26 18:10:15.212175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:23:58.375 [2024-11-26 18:10:15.212197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.133 ms 00:23:58.375 [2024-11-26 18:10:15.212208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.375 [2024-11-26 18:10:15.213358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.375 [2024-11-26 18:10:15.213390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:23:58.375 [2024-11-26 18:10:15.213403] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:23:58.375 [2024-11-26 18:10:15.213413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.375 [2024-11-26 18:10:15.213870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.375 [2024-11-26 18:10:15.213899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:58.375 [2024-11-26 18:10:15.213911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.380 ms 00:23:58.375 [2024-11-26 18:10:15.213922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.375 [2024-11-26 18:10:15.213972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.375 [2024-11-26 18:10:15.213991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:58.375 [2024-11-26 18:10:15.214003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:23:58.375 [2024-11-26 18:10:15.214013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.375 [2024-11-26 18:10:15.214045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.375 [2024-11-26 18:10:15.214056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:23:58.375 [2024-11-26 18:10:15.214071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:23:58.375 [2024-11-26 18:10:15.214082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.375 [2024-11-26 18:10:15.214112] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:23:58.375 [2024-11-26 18:10:15.215022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.375 [2024-11-26 18:10:15.215053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:58.375 [2024-11-26 18:10:15.215066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.923 ms 00:23:58.375 [2024-11-26 18:10:15.215076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.375 [2024-11-26 18:10:15.215115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.375 [2024-11-26 18:10:15.215127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:23:58.375 [2024-11-26 18:10:15.215145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:58.375 [2024-11-26 18:10:15.215158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.375 [2024-11-26 18:10:15.215194] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:23:58.375 [2024-11-26 18:10:15.215217] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:23:58.375 [2024-11-26 18:10:15.215251] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:23:58.375 [2024-11-26 18:10:15.215276] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:23:58.375 [2024-11-26 18:10:15.215350] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:23:58.375 [2024-11-26 18:10:15.215367] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:23:58.375 [2024-11-26 18:10:15.215396] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes 00:23:58.375 [2024-11-26 18:10:15.215410] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:23:58.375 [2024-11-26 18:10:15.215422] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:23:58.375 [2024-11-26 18:10:15.215434] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:23:58.375 [2024-11-26 18:10:15.215464] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:23:58.375 [2024-11-26 18:10:15.215476] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:23:58.375 [2024-11-26 18:10:15.215487] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:23:58.375 [2024-11-26 18:10:15.215497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.375 [2024-11-26 18:10:15.215508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:23:58.375 [2024-11-26 18:10:15.215530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.306 ms 00:23:58.375 [2024-11-26 18:10:15.215545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.375 [2024-11-26 18:10:15.215608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.375 [2024-11-26 18:10:15.215620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:23:58.375 [2024-11-26 18:10:15.215631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:23:58.375 [2024-11-26 18:10:15.215641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.375 [2024-11-26 18:10:15.215725] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:23:58.375 [2024-11-26 18:10:15.215738] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:23:58.375 [2024-11-26 18:10:15.215749] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:58.375 [2024-11-26 18:10:15.215768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:58.376 [2024-11-26 18:10:15.215782] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:23:58.376 [2024-11-26 18:10:15.215792] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:23:58.376 [2024-11-26 18:10:15.215802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:23:58.376 [2024-11-26 18:10:15.215812] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:23:58.376 [2024-11-26 18:10:15.215821] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:23:58.376 [2024-11-26 18:10:15.215834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:58.376 [2024-11-26 18:10:15.215844] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:23:58.376 [2024-11-26 18:10:15.215855] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:23:58.376 [2024-11-26 18:10:15.215864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:58.376 [2024-11-26 18:10:15.215873] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:23:58.376 [2024-11-26 18:10:15.215883] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:23:58.376 [2024-11-26 18:10:15.215892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:58.376 [2024-11-26 18:10:15.215902] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
nvc_md_mirror 00:23:58.376 [2024-11-26 18:10:15.215912] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:23:58.376 [2024-11-26 18:10:15.215921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:58.376 [2024-11-26 18:10:15.215931] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:23:58.376 [2024-11-26 18:10:15.215944] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:23:58.376 [2024-11-26 18:10:15.215955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:23:58.376 [2024-11-26 18:10:15.215965] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:23:58.376 [2024-11-26 18:10:15.215975] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:23:58.376 [2024-11-26 18:10:15.215984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:23:58.376 [2024-11-26 18:10:15.215997] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:23:58.376 [2024-11-26 18:10:15.216007] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:23:58.376 [2024-11-26 18:10:15.216017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:23:58.376 [2024-11-26 18:10:15.216026] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:23:58.376 [2024-11-26 18:10:15.216036] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:23:58.376 [2024-11-26 18:10:15.216045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:23:58.376 [2024-11-26 18:10:15.216054] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:23:58.376 [2024-11-26 18:10:15.216064] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:23:58.376 [2024-11-26 18:10:15.216074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:23:58.376 [2024-11-26 18:10:15.216083] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:23:58.376 [2024-11-26 18:10:15.216093] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:23:58.376 [2024-11-26 18:10:15.216102] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:58.376 [2024-11-26 18:10:15.216112] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:23:58.376 [2024-11-26 18:10:15.216122] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:23:58.376 [2024-11-26 18:10:15.216131] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:58.376 [2024-11-26 18:10:15.216140] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:23:58.376 [2024-11-26 18:10:15.216154] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:23:58.376 [2024-11-26 18:10:15.216165] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:58.376 [2024-11-26 18:10:15.216175] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:58.376 [2024-11-26 18:10:15.216185] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:23:58.376 [2024-11-26 18:10:15.216195] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:23:58.376 [2024-11-26 18:10:15.216204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:23:58.376 [2024-11-26 18:10:15.216214] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:23:58.376 [2024-11-26 18:10:15.216224] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 
0.25 MiB 00:23:58.376 [2024-11-26 18:10:15.216233] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:23:58.376 [2024-11-26 18:10:15.216244] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:23:58.376 [2024-11-26 18:10:15.216257] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:58.376 [2024-11-26 18:10:15.216270] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:23:58.376 [2024-11-26 18:10:15.216281] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:23:58.376 [2024-11-26 18:10:15.216292] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:23:58.376 [2024-11-26 18:10:15.216303] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:23:58.376 [2024-11-26 18:10:15.216314] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:23:58.376 [2024-11-26 18:10:15.216328] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:23:58.376 [2024-11-26 18:10:15.216339] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:23:58.376 [2024-11-26 18:10:15.216350] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:23:58.376 [2024-11-26 18:10:15.216361] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:23:58.376 [2024-11-26 18:10:15.216371] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:23:58.376 [2024-11-26 18:10:15.216382] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:23:58.376 [2024-11-26 18:10:15.216394] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:23:58.376 [2024-11-26 18:10:15.216405] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:23:58.376 [2024-11-26 18:10:15.216416] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:23:58.376 [2024-11-26 18:10:15.216435] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:58.376 [2024-11-26 18:10:15.216446] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:58.376 [2024-11-26 18:10:15.216478] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:23:58.376 [2024-11-26 18:10:15.216489] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:23:58.376 
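Note: the region lines above are the superblock v5 metadata map, one entry per region: type, on-media version, starting block offset, and size in blocks. Assuming FTL's usual 4 KiB logical block size, the hex sizes line up exactly with the MiB figures in the layout dump printed just before them, which is a quick way to sanity-check the table:

blk=4096                                    # assumed FTL logical block size in bytes
echo $(( 0x400    * blk / 1024 / 1024 ))    # 4     -> p2l0..p2l3 (types 0xa-0xd)
echo $(( 0x100000 * blk / 1024 / 1024 ))    # 4096  -> data_nvc   (type 0x8)
echo $(( 0x480000 * blk / 1024 / 1024 ))    # 18432 -> data_btm   (type 0x9, base dev)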
[2024-11-26 18:10:15.216500] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:23:58.376 [2024-11-26 18:10:15.216512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.376 [2024-11-26 18:10:15.216523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:23:58.376 [2024-11-26 18:10:15.216540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.826 ms 00:23:58.376 [2024-11-26 18:10:15.216551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.376 [2024-11-26 18:10:15.223705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.376 [2024-11-26 18:10:15.223748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:58.376 [2024-11-26 18:10:15.223762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.116 ms 00:23:58.376 [2024-11-26 18:10:15.223773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.376 [2024-11-26 18:10:15.223816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.376 [2024-11-26 18:10:15.223834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:23:58.376 [2024-11-26 18:10:15.223846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:23:58.376 [2024-11-26 18:10:15.223860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.376 [2024-11-26 18:10:15.236793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.376 [2024-11-26 18:10:15.236847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:58.376 [2024-11-26 18:10:15.236874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.891 ms 00:23:58.376 [2024-11-26 18:10:15.236886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.376 [2024-11-26 18:10:15.236953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.376 [2024-11-26 18:10:15.236965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:58.376 [2024-11-26 18:10:15.236991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:58.376 [2024-11-26 18:10:15.237009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.376 [2024-11-26 18:10:15.237149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.376 [2024-11-26 18:10:15.237179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:58.376 [2024-11-26 18:10:15.237191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:23:58.376 [2024-11-26 18:10:15.237207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.376 [2024-11-26 18:10:15.237254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.376 [2024-11-26 18:10:15.237266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:58.376 [2024-11-26 18:10:15.237284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:23:58.376 [2024-11-26 18:10:15.237295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.376 [2024-11-26 18:10:15.245145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.376 [2024-11-26 18:10:15.245195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:58.376 [2024-11-26 
18:10:15.245210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.825 ms 00:23:58.376 [2024-11-26 18:10:15.245222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.376 [2024-11-26 18:10:15.245349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.376 [2024-11-26 18:10:15.245364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:23:58.377 [2024-11-26 18:10:15.245376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:58.377 [2024-11-26 18:10:15.245398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.377 [2024-11-26 18:10:15.249608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.377 [2024-11-26 18:10:15.249649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:23:58.377 [2024-11-26 18:10:15.249663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.194 ms 00:23:58.377 [2024-11-26 18:10:15.249674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.377 [2024-11-26 18:10:15.251005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.377 [2024-11-26 18:10:15.251068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:23:58.377 [2024-11-26 18:10:15.251093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.163 ms 00:23:58.377 [2024-11-26 18:10:15.251116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.377 [2024-11-26 18:10:15.275658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.377 [2024-11-26 18:10:15.275739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:23:58.377 [2024-11-26 18:10:15.275769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.508 ms 00:23:58.377 [2024-11-26 18:10:15.275789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.377 [2024-11-26 18:10:15.275934] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:23:58.377 [2024-11-26 18:10:15.276000] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:23:58.377 [2024-11-26 18:10:15.276057] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:23:58.377 [2024-11-26 18:10:15.276114] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:23:58.377 [2024-11-26 18:10:15.276135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.377 [2024-11-26 18:10:15.276155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:23:58.377 [2024-11-26 18:10:15.276176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.255 ms 00:23:58.377 [2024-11-26 18:10:15.276202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.377 [2024-11-26 18:10:15.276330] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:23:58.377 [2024-11-26 18:10:15.276355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.377 [2024-11-26 18:10:15.276374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:23:58.377 [2024-11-26 18:10:15.276394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:23:58.377 [2024-11-26 
18:10:15.276413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.377 [2024-11-26 18:10:15.279343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.377 [2024-11-26 18:10:15.279389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:23:58.377 [2024-11-26 18:10:15.279406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.862 ms 00:23:58.377 [2024-11-26 18:10:15.279445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.377 [2024-11-26 18:10:15.280230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.377 [2024-11-26 18:10:15.280266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:23:58.377 [2024-11-26 18:10:15.280283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:23:58.377 [2024-11-26 18:10:15.280294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.377 [2024-11-26 18:10:15.280339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:58.377 [2024-11-26 18:10:15.280352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:23:58.377 [2024-11-26 18:10:15.280363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:23:58.377 [2024-11-26 18:10:15.280373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:58.377 [2024-11-26 18:10:15.280636] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:23:58.943 [2024-11-26 18:10:15.737021] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:23:58.943 [2024-11-26 18:10:15.737218] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:23:59.511 [2024-11-26 18:10:16.204829] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:23:59.511 [2024-11-26 18:10:16.204944] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:59.511 [2024-11-26 18:10:16.204964] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:23:59.511 [2024-11-26 18:10:16.204979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:59.511 [2024-11-26 18:10:16.204992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:23:59.511 [2024-11-26 18:10:16.205008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 926.073 ms 00:23:59.511 [2024-11-26 18:10:16.205019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:59.511 [2024-11-26 18:10:16.205060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:59.511 [2024-11-26 18:10:16.205072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:23:59.511 [2024-11-26 18:10:16.205088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:23:59.511 [2024-11-26 18:10:16.205099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:59.511 [2024-11-26 18:10:16.212772] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:23:59.511 [2024-11-26 18:10:16.212926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:59.511 [2024-11-26 18:10:16.212941] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:23:59.511 [2024-11-26 18:10:16.212954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.810 ms 00:23:59.511 [2024-11-26 18:10:16.212965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:59.511 [2024-11-26 18:10:16.213638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:59.511 [2024-11-26 18:10:16.213667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:23:59.511 [2024-11-26 18:10:16.213680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.571 ms 00:23:59.511 [2024-11-26 18:10:16.213691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:59.511 [2024-11-26 18:10:16.215803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:59.511 [2024-11-26 18:10:16.215833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:23:59.511 [2024-11-26 18:10:16.215846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.094 ms 00:23:59.511 [2024-11-26 18:10:16.215856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:59.511 [2024-11-26 18:10:16.219407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:59.511 [2024-11-26 18:10:16.219466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:23:59.511 [2024-11-26 18:10:16.219482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.522 ms 00:23:59.511 [2024-11-26 18:10:16.219494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:59.511 [2024-11-26 18:10:16.219592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:59.512 [2024-11-26 18:10:16.219607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:23:59.512 [2024-11-26 18:10:16.219620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:23:59.512 [2024-11-26 18:10:16.219630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:59.512 [2024-11-26 18:10:16.221924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:59.512 [2024-11-26 18:10:16.221960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:23:59.512 [2024-11-26 18:10:16.221972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.272 ms 00:23:59.512 [2024-11-26 18:10:16.221983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:59.512 [2024-11-26 18:10:16.222018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:59.512 [2024-11-26 18:10:16.222030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:23:59.512 [2024-11-26 18:10:16.222041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:23:59.512 [2024-11-26 18:10:16.222052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:59.512 [2024-11-26 18:10:16.222107] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:23:59.512 [2024-11-26 18:10:16.222121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:59.512 [2024-11-26 18:10:16.222144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:23:59.512 [2024-11-26 18:10:16.222156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:23:59.512 [2024-11-26 18:10:16.222166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:23:59.512 [2024-11-26 18:10:16.222230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:59.512 [2024-11-26 18:10:16.222243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:23:59.512 [2024-11-26 18:10:16.222254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:23:59.512 [2024-11-26 18:10:16.222265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:59.512 [2024-11-26 18:10:16.223546] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1017.783 ms, result 0 00:23:59.512 [2024-11-26 18:10:16.238231] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.512 [2024-11-26 18:10:16.254221] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:23:59.512 [2024-11-26 18:10:16.262393] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:00.076 18:10:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.076 18:10:16 -- common/autotest_common.sh@862 -- # return 0 00:24:00.076 18:10:16 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:00.076 18:10:16 -- ftl/common.sh@95 -- # return 0 00:24:00.076 18:10:16 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:24:00.076 18:10:16 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:24:00.076 18:10:16 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:24:00.076 18:10:16 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:00.076 Validate MD5 checksum, iteration 1 00:24:00.076 18:10:16 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:24:00.076 18:10:16 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:00.076 18:10:16 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:00.076 18:10:16 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:00.076 18:10:16 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:00.076 18:10:16 -- ftl/common.sh@154 -- # return 0 00:24:00.076 18:10:16 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:00.333 [2024-11-26 18:10:17.024594] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
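Note: recovery after the SIGKILL took 1017.783 ms end to end, dominated by the 926 ms "Recover open chunks P2L" step that replayed the two open NV-cache chunks from their P2L sequence IDs (14 and 15), and the target is back listening on 127.0.0.1:4420. The second validation pass starting here re-reads both 1 GiB windows; the sums must come back identical to the pre-kill values (e3cde7... and 52d16...), which is the actual assertion that no acknowledged data was lost. The cache state the recovery reported ("full chunks = 2, empty chunks = 2") could be re-checked with the same RPC the script uses elsewhere:

# Expect two chunks holding data and two free ones, matching the log line above.
scripts/rpc.py bdev_ftl_get_properties -b ftl \
  | jq '[.properties[] | select(.name == "cache_device") | .chunks[].state]'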
00:24:00.333 [2024-11-26 18:10:17.024724] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88741 ] 00:24:00.333 [2024-11-26 18:10:17.175497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.333 [2024-11-26 18:10:17.230423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.709  [2024-11-26T18:10:19.589Z] Copying: 643/1024 [MB] (643 MBps) [2024-11-26T18:10:22.132Z] Copying: 1024/1024 [MB] (average 536 MBps) 00:24:05.206 00:24:05.206 18:10:22 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:24:05.206 18:10:22 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:07.110 18:10:23 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:24:07.110 Validate MD5 checksum, iteration 2 00:24:07.110 18:10:23 -- ftl/upgrade_shutdown.sh@103 -- # sum=e3cde7d6e9c5ee559a84116856bf65f8 00:24:07.110 18:10:23 -- ftl/upgrade_shutdown.sh@105 -- # [[ e3cde7d6e9c5ee559a84116856bf65f8 != \e\3\c\d\e\7\d\6\e\9\c\5\e\e\5\5\9\a\8\4\1\1\6\8\5\6\b\f\6\5\f\8 ]] 00:24:07.110 18:10:23 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:24:07.110 18:10:23 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:07.110 18:10:23 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:24:07.110 18:10:23 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:07.110 18:10:23 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:07.110 18:10:23 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:07.110 18:10:23 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:07.110 18:10:23 -- ftl/common.sh@154 -- # return 0 00:24:07.110 18:10:23 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:07.110 [2024-11-26 18:10:23.911185] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:07.110 [2024-11-26 18:10:23.911580] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88819 ] 00:24:07.369 [2024-11-26 18:10:24.065546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.369 [2024-11-26 18:10:24.139751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.745  [2024-11-26T18:10:26.237Z] Copying: 704/1024 [MB] (704 MBps) [2024-11-26T18:10:26.817Z] Copying: 1024/1024 [MB] (average 678 MBps) 00:24:09.891 00:24:09.891 18:10:26 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:24:09.891 18:10:26 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:11.830 18:10:28 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:24:11.830 18:10:28 -- ftl/upgrade_shutdown.sh@103 -- # sum=52d16acf1b31a079fb5318f09023e9cc 00:24:11.830 18:10:28 -- ftl/upgrade_shutdown.sh@105 -- # [[ 52d16acf1b31a079fb5318f09023e9cc != \5\2\d\1\6\a\c\f\1\b\3\1\a\0\7\9\f\b\5\3\1\8\f\0\9\0\2\3\e\9\c\c ]] 00:24:11.830 18:10:28 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:24:11.830 18:10:28 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:11.830 18:10:28 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:24:11.830 18:10:28 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:24:11.830 18:10:28 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:24:11.830 18:10:28 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:11.830 18:10:28 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:24:11.830 18:10:28 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:24:11.830 18:10:28 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:24:11.830 18:10:28 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:24:11.830 18:10:28 -- ftl/common.sh@130 -- # [[ -n 88708 ]] 00:24:11.830 18:10:28 -- ftl/common.sh@131 -- # killprocess 88708 00:24:11.830 18:10:28 -- common/autotest_common.sh@936 -- # '[' -z 88708 ']' 00:24:11.830 18:10:28 -- common/autotest_common.sh@940 -- # kill -0 88708 00:24:11.830 18:10:28 -- common/autotest_common.sh@941 -- # uname 00:24:11.830 18:10:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:11.830 18:10:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88708 00:24:11.830 killing process with pid 88708 00:24:11.830 18:10:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:11.830 18:10:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:11.830 18:10:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88708' 00:24:11.830 18:10:28 -- common/autotest_common.sh@955 -- # kill 88708 00:24:11.830 18:10:28 -- common/autotest_common.sh@960 -- # wait 88708 00:24:12.090 [2024-11-26 18:10:28.807725] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:24:12.090 [2024-11-26 18:10:28.813885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:12.090 [2024-11-26 18:10:28.813933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:24:12.090 [2024-11-26 18:10:28.813949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:12.090 [2024-11-26 18:10:28.813960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.090 
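Note: this time the target (pid 88708) is taken down gracefully. killprocess sends a plain kill, i.e. SIGTERM, so the FTL shutdown path below runs to completion, persisting L2P, NV-cache, valid-map, P2L and band metadata, writing the superblock, and marking the device clean before the process exits; the trace further down reports the whole 'FTL shutdown' sequence finishing in 38.947 ms. Per the autotest_common.sh traces above, killprocess is roughly this (the uname guard is elided):

killprocess() {
  local pid=$1
  kill -0 "$pid" || return 1                             # still running?
  [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
  echo "killing process with pid $pid"
  kill "$pid"    # default SIGTERM -> clean FTL shutdown, unlike the earlier kill -9
  wait "$pid"
}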
[2024-11-26 18:10:28.813988] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:24:12.090 [2024-11-26 18:10:28.814650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:12.090 [2024-11-26 18:10:28.814675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:24:12.090 [2024-11-26 18:10:28.814686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.648 ms 00:24:12.090 [2024-11-26 18:10:28.814696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.090 [2024-11-26 18:10:28.814900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:12.090 [2024-11-26 18:10:28.814933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:24:12.090 [2024-11-26 18:10:28.814944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.183 ms 00:24:12.090 [2024-11-26 18:10:28.814954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.090 [2024-11-26 18:10:28.816009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:12.090 [2024-11-26 18:10:28.816043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:24:12.090 [2024-11-26 18:10:28.816060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.037 ms 00:24:12.090 [2024-11-26 18:10:28.816070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.090 [2024-11-26 18:10:28.817044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:12.090 [2024-11-26 18:10:28.817072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:24:12.090 [2024-11-26 18:10:28.817084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.917 ms 00:24:12.090 [2024-11-26 18:10:28.817093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.090 [2024-11-26 18:10:28.818267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:12.090 [2024-11-26 18:10:28.818312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:24:12.090 [2024-11-26 18:10:28.818325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.139 ms 00:24:12.090 [2024-11-26 18:10:28.818336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.090 [2024-11-26 18:10:28.820020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:12.090 [2024-11-26 18:10:28.820056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:24:12.090 [2024-11-26 18:10:28.820068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.613 ms 00:24:12.090 [2024-11-26 18:10:28.820095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.090 [2024-11-26 18:10:28.820181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:12.090 [2024-11-26 18:10:28.820208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:24:12.090 [2024-11-26 18:10:28.820220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:24:12.090 [2024-11-26 18:10:28.820230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.090 [2024-11-26 18:10:28.821267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:12.090 [2024-11-26 18:10:28.821302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:24:12.090 [2024-11-26 18:10:28.821314] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.020 ms 00:24:12.090 [2024-11-26 18:10:28.821324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.090 [2024-11-26 18:10:28.822407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:12.090 [2024-11-26 18:10:28.822443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:24:12.090 [2024-11-26 18:10:28.822470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.056 ms 00:24:12.090 [2024-11-26 18:10:28.822481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.090 [2024-11-26 18:10:28.823585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:12.090 [2024-11-26 18:10:28.823619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:24:12.090 [2024-11-26 18:10:28.823630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.073 ms 00:24:12.090 [2024-11-26 18:10:28.823639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.090 [2024-11-26 18:10:28.824732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:12.090 [2024-11-26 18:10:28.824765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:24:12.090 [2024-11-26 18:10:28.824776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.040 ms 00:24:12.090 [2024-11-26 18:10:28.824785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.090 [2024-11-26 18:10:28.824814] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:24:12.090 [2024-11-26 18:10:28.824831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:12.090 [2024-11-26 18:10:28.824844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:24:12.090 [2024-11-26 18:10:28.824855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:24:12.090 [2024-11-26 18:10:28.824866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:12.090 [2024-11-26 18:10:28.824876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:12.090 [2024-11-26 18:10:28.824887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:12.090 [2024-11-26 18:10:28.824897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:12.090 [2024-11-26 18:10:28.824907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:12.091 [2024-11-26 18:10:28.824917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:12.091 [2024-11-26 18:10:28.824928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:12.091 [2024-11-26 18:10:28.824938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:12.091 [2024-11-26 18:10:28.824949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:12.091 [2024-11-26 18:10:28.824959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:12.091 [2024-11-26 18:10:28.824969] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:12.091 [2024-11-26 18:10:28.824979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:12.091 [2024-11-26 18:10:28.824989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:12.091 [2024-11-26 18:10:28.824999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:12.091 [2024-11-26 18:10:28.825010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:12.091 [2024-11-26 18:10:28.825022] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:24:12.091 [2024-11-26 18:10:28.825032] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 6cc17920-e1e5-4391-b94b-a54981d27a0e 00:24:12.091 [2024-11-26 18:10:28.825043] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:24:12.091 [2024-11-26 18:10:28.825052] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:24:12.091 [2024-11-26 18:10:28.825061] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:24:12.091 [2024-11-26 18:10:28.825072] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:24:12.091 [2024-11-26 18:10:28.825086] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:24:12.091 [2024-11-26 18:10:28.825105] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:24:12.091 [2024-11-26 18:10:28.825115] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:24:12.091 [2024-11-26 18:10:28.825124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:24:12.091 [2024-11-26 18:10:28.825133] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:24:12.091 [2024-11-26 18:10:28.825142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:12.091 [2024-11-26 18:10:28.825153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:24:12.091 [2024-11-26 18:10:28.825164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.329 ms 00:24:12.091 [2024-11-26 18:10:28.825173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.091 [2024-11-26 18:10:28.826894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:12.091 [2024-11-26 18:10:28.826921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:24:12.091 [2024-11-26 18:10:28.826937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.704 ms 00:24:12.091 [2024-11-26 18:10:28.826947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.091 [2024-11-26 18:10:28.827016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:12.091 [2024-11-26 18:10:28.827043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:24:12.091 [2024-11-26 18:10:28.827055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:24:12.091 [2024-11-26 18:10:28.827069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.091 [2024-11-26 18:10:28.834056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:12.091 [2024-11-26 18:10:28.834092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:12.091 [2024-11-26 18:10:28.834111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.000 ms 00:24:12.091 [2024-11-26 18:10:28.834121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.091 [2024-11-26 18:10:28.834152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:12.091 [2024-11-26 18:10:28.834162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:12.091 [2024-11-26 18:10:28.834173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:12.091 [2024-11-26 18:10:28.834183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.091 [2024-11-26 18:10:28.834300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:12.091 [2024-11-26 18:10:28.834315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:24:12.091 [2024-11-26 18:10:28.834326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:12.091 [2024-11-26 18:10:28.834341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.091 [2024-11-26 18:10:28.834363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:12.091 [2024-11-26 18:10:28.834373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:12.091 [2024-11-26 18:10:28.834384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:12.091 [2024-11-26 18:10:28.834394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.091 [2024-11-26 18:10:28.847725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:12.091 [2024-11-26 18:10:28.847776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:12.091 [2024-11-26 18:10:28.847796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:12.091 [2024-11-26 18:10:28.847807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.091 [2024-11-26 18:10:28.852054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:12.091 [2024-11-26 18:10:28.852089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:12.091 [2024-11-26 18:10:28.852119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:12.091 [2024-11-26 18:10:28.852130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.091 [2024-11-26 18:10:28.852197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:12.091 [2024-11-26 18:10:28.852208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:12.091 [2024-11-26 18:10:28.852219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:12.091 [2024-11-26 18:10:28.852236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.091 [2024-11-26 18:10:28.852276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:12.091 [2024-11-26 18:10:28.852286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:12.091 [2024-11-26 18:10:28.852296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:12.091 [2024-11-26 18:10:28.852306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.091 [2024-11-26 18:10:28.852385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:12.091 [2024-11-26 18:10:28.852397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:12.091 [2024-11-26 18:10:28.852420] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:12.091 [2024-11-26 18:10:28.852430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.091 [2024-11-26 18:10:28.852487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:12.091 [2024-11-26 18:10:28.852501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:24:12.091 [2024-11-26 18:10:28.852511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:12.091 [2024-11-26 18:10:28.852521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.091 [2024-11-26 18:10:28.852561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:12.091 [2024-11-26 18:10:28.852571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:12.091 [2024-11-26 18:10:28.852582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:12.091 [2024-11-26 18:10:28.852591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.091 [2024-11-26 18:10:28.852643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:12.091 [2024-11-26 18:10:28.852656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:12.091 [2024-11-26 18:10:28.852666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:12.091 [2024-11-26 18:10:28.852675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:12.091 [2024-11-26 18:10:28.852803] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 38.947 ms, result 0 00:24:12.356 18:10:29 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:24:12.356 18:10:29 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:12.356 18:10:29 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:24:12.356 18:10:29 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:24:12.356 18:10:29 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:24:12.356 18:10:29 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:12.356 Remove shared memory files 00:24:12.356 18:10:29 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:24:12.356 18:10:29 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:12.356 18:10:29 -- ftl/common.sh@205 -- # rm -f rm -f 00:24:12.356 18:10:29 -- ftl/common.sh@206 -- # rm -f rm -f 00:24:12.356 18:10:29 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid88569 00:24:12.356 18:10:29 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:12.356 18:10:29 -- ftl/common.sh@209 -- # rm -f rm -f 00:24:12.356 00:24:12.356 real 1m6.532s 00:24:12.356 user 1m30.186s 00:24:12.356 sys 0m21.643s 00:24:12.356 18:10:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:12.356 18:10:29 -- common/autotest_common.sh@10 -- # set +x 00:24:12.356 ************************************ 00:24:12.356 END TEST ftl_upgrade_shutdown 00:24:12.356 ************************************ 00:24:12.356 18:10:29 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:24:12.356 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:24:12.356 18:10:29 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:24:12.357 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:24:12.357 18:10:29 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:24:12.357 18:10:29 -- ftl/ftl.sh@14 -- # killprocess 82044 00:24:12.357 18:10:29 -- 
common/autotest_common.sh@936 -- # '[' -z 82044 ']' 00:24:12.357 18:10:29 -- common/autotest_common.sh@940 -- # kill -0 82044 00:24:12.357 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (82044) - No such process 00:24:12.357 Process with pid 82044 is not found 00:24:12.357 18:10:29 -- common/autotest_common.sh@963 -- # echo 'Process with pid 82044 is not found' 00:24:12.357 18:10:29 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]] 00:24:12.357 18:10:29 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=88905 00:24:12.357 18:10:29 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:12.357 18:10:29 -- ftl/ftl.sh@20 -- # waitforlisten 88905 00:24:12.357 18:10:29 -- common/autotest_common.sh@829 -- # '[' -z 88905 ']' 00:24:12.357 18:10:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.357 18:10:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:12.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.357 18:10:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.357 18:10:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:12.357 18:10:29 -- common/autotest_common.sh@10 -- # set +x 00:24:12.625 [2024-11-26 18:10:29.284058] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:12.625 [2024-11-26 18:10:29.284189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88905 ] 00:24:12.625 [2024-11-26 18:10:29.434202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.625 [2024-11-26 18:10:29.481047] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:12.625 [2024-11-26 18:10:29.481244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.192 18:10:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:13.192 18:10:30 -- common/autotest_common.sh@862 -- # return 0 00:24:13.192 18:10:30 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:24:13.450 nvme0n1 00:24:13.752 18:10:30 -- ftl/ftl.sh@22 -- # clear_lvols 00:24:13.752 18:10:30 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:13.752 18:10:30 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:13.752 18:10:30 -- ftl/common.sh@28 -- # stores=2e5ee114-336d-46c1-b10e-ab334fb49274 00:24:13.752 18:10:30 -- ftl/common.sh@29 -- # for lvs in $stores 00:24:13.752 18:10:30 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2e5ee114-336d-46c1-b10e-ab334fb49274 00:24:14.011 18:10:30 -- ftl/ftl.sh@23 -- # killprocess 88905 00:24:14.011 18:10:30 -- common/autotest_common.sh@936 -- # '[' -z 88905 ']' 00:24:14.011 18:10:30 -- common/autotest_common.sh@940 -- # kill -0 88905 00:24:14.011 18:10:30 -- common/autotest_common.sh@941 -- # uname 00:24:14.011 18:10:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:14.011 18:10:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88905 00:24:14.011 18:10:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:14.011 18:10:30 -- common/autotest_common.sh@946 -- # '[' 
reactor_0 = sudo ']' 00:24:14.011 18:10:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88905' 00:24:14.011 killing process with pid 88905 00:24:14.011 18:10:30 -- common/autotest_common.sh@955 -- # kill 88905 00:24:14.012 18:10:30 -- common/autotest_common.sh@960 -- # wait 88905 00:24:14.602 18:10:31 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:14.861 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:14.861 Waiting for block devices as requested 00:24:15.121 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:24:15.121 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:24:15.121 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:24:15.378 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:24:20.645 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:24:20.645 18:10:37 -- ftl/ftl.sh@28 -- # remove_shm 00:24:20.646 Remove shared memory files 00:24:20.646 18:10:37 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:20.646 18:10:37 -- ftl/common.sh@205 -- # rm -f rm -f 00:24:20.646 18:10:37 -- ftl/common.sh@206 -- # rm -f rm -f 00:24:20.646 18:10:37 -- ftl/common.sh@207 -- # rm -f rm -f 00:24:20.646 18:10:37 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:20.646 18:10:37 -- ftl/common.sh@209 -- # rm -f rm -f 00:24:20.646 00:24:20.646 real 9m37.777s 00:24:20.646 user 11m46.291s 00:24:20.646 sys 1m26.463s 00:24:20.646 18:10:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:20.646 ************************************ 00:24:20.646 END TEST ftl 00:24:20.646 ************************************ 00:24:20.646 18:10:37 -- common/autotest_common.sh@10 -- # set +x 00:24:20.646 18:10:37 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:24:20.646 18:10:37 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:24:20.646 18:10:37 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:24:20.646 18:10:37 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:24:20.646 18:10:37 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:24:20.646 18:10:37 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:24:20.646 18:10:37 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:24:20.646 18:10:37 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:24:20.646 18:10:37 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:24:20.646 18:10:37 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:24:20.646 18:10:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:20.646 18:10:37 -- common/autotest_common.sh@10 -- # set +x 00:24:20.646 18:10:37 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:24:20.646 18:10:37 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:24:20.646 18:10:37 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:24:20.646 18:10:37 -- common/autotest_common.sh@10 -- # set +x 00:24:22.575 INFO: APP EXITING 00:24:22.575 INFO: killing all VMs 00:24:22.575 INFO: killing vhost app 00:24:22.575 INFO: EXIT DONE 00:24:23.506 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:23.506 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:24:23.506 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:24:23.506 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:24:23.764 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:24:24.328 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI 
dev 00:24:24.585 Cleaning 00:24:24.585 Removing: /var/run/dpdk/spdk0/config 00:24:24.585 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:24.585 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:24.585 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:24.585 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:24.585 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:24.585 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:24.585 Removing: /var/run/dpdk/spdk0 00:24:24.585 Removing: /var/run/dpdk/spdk_pid68497 00:24:24.585 Removing: /var/run/dpdk/spdk_pid68664 00:24:24.585 Removing: /var/run/dpdk/spdk_pid68965 00:24:24.585 Removing: /var/run/dpdk/spdk_pid69032 00:24:24.585 Removing: /var/run/dpdk/spdk_pid69116 00:24:24.585 Removing: /var/run/dpdk/spdk_pid69215 00:24:24.585 Removing: /var/run/dpdk/spdk_pid69300 00:24:24.585 Removing: /var/run/dpdk/spdk_pid69334 00:24:24.585 Removing: /var/run/dpdk/spdk_pid69365 00:24:24.585 Removing: /var/run/dpdk/spdk_pid69440 00:24:24.585 Removing: /var/run/dpdk/spdk_pid69541 00:24:24.585 Removing: /var/run/dpdk/spdk_pid69967 00:24:24.585 Removing: /var/run/dpdk/spdk_pid70014 00:24:24.585 Removing: /var/run/dpdk/spdk_pid70066 00:24:24.585 Removing: /var/run/dpdk/spdk_pid70077 00:24:24.585 Removing: /var/run/dpdk/spdk_pid70146 00:24:24.585 Removing: /var/run/dpdk/spdk_pid70162 00:24:24.585 Removing: /var/run/dpdk/spdk_pid70231 00:24:24.585 Removing: /var/run/dpdk/spdk_pid70249 00:24:24.585 Removing: /var/run/dpdk/spdk_pid70303 00:24:24.585 Removing: /var/run/dpdk/spdk_pid70321 00:24:24.585 Removing: /var/run/dpdk/spdk_pid70363 00:24:24.585 Removing: /var/run/dpdk/spdk_pid70381 00:24:24.585 Removing: /var/run/dpdk/spdk_pid70507 00:24:24.843 Removing: /var/run/dpdk/spdk_pid70544 00:24:24.843 Removing: /var/run/dpdk/spdk_pid70626 00:24:24.843 Removing: /var/run/dpdk/spdk_pid70684 00:24:24.843 Removing: /var/run/dpdk/spdk_pid70705 00:24:24.843 Removing: /var/run/dpdk/spdk_pid70772 00:24:24.843 Removing: /var/run/dpdk/spdk_pid70793 00:24:24.843 Removing: /var/run/dpdk/spdk_pid70828 00:24:24.843 Removing: /var/run/dpdk/spdk_pid70849 00:24:24.843 Removing: /var/run/dpdk/spdk_pid70879 00:24:24.843 Removing: /var/run/dpdk/spdk_pid70899 00:24:24.843 Removing: /var/run/dpdk/spdk_pid70935 00:24:24.843 Removing: /var/run/dpdk/spdk_pid70961 00:24:24.843 Removing: /var/run/dpdk/spdk_pid70994 00:24:24.843 Removing: /var/run/dpdk/spdk_pid71020 00:24:24.843 Removing: /var/run/dpdk/spdk_pid71050 00:24:24.843 Removing: /var/run/dpdk/spdk_pid71076 00:24:24.843 Removing: /var/run/dpdk/spdk_pid71106 00:24:24.843 Removing: /var/run/dpdk/spdk_pid71132 00:24:24.843 Removing: /var/run/dpdk/spdk_pid71162 00:24:24.843 Removing: /var/run/dpdk/spdk_pid71188 00:24:24.843 Removing: /var/run/dpdk/spdk_pid71224 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71244 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71280 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71300 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71331 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71351 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71386 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71407 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71442 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71463 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71498 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71519 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71549 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71575 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71605 00:24:24.844 Removing: 
/var/run/dpdk/spdk_pid71631 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71661 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71690 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71723 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71752 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71791 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71811 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71847 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71873 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71904 00:24:24.844 Removing: /var/run/dpdk/spdk_pid71982 00:24:24.844 Removing: /var/run/dpdk/spdk_pid72083 00:24:24.844 Removing: /var/run/dpdk/spdk_pid72244 00:24:24.844 Removing: /var/run/dpdk/spdk_pid72317 00:24:24.844 Removing: /var/run/dpdk/spdk_pid72348 00:24:24.844 Removing: /var/run/dpdk/spdk_pid72775 00:24:24.844 Removing: /var/run/dpdk/spdk_pid72903 00:24:24.844 Removing: /var/run/dpdk/spdk_pid72996 00:24:24.844 Removing: /var/run/dpdk/spdk_pid73038 00:24:24.844 Removing: /var/run/dpdk/spdk_pid73063 00:24:24.844 Removing: /var/run/dpdk/spdk_pid73141 00:24:25.102 Removing: /var/run/dpdk/spdk_pid73792 00:24:25.102 Removing: /var/run/dpdk/spdk_pid73823 00:24:25.102 Removing: /var/run/dpdk/spdk_pid74277 00:24:25.102 Removing: /var/run/dpdk/spdk_pid74375 00:24:25.102 Removing: /var/run/dpdk/spdk_pid74469 00:24:25.102 Removing: /var/run/dpdk/spdk_pid74516 00:24:25.102 Removing: /var/run/dpdk/spdk_pid74542 00:24:25.102 Removing: /var/run/dpdk/spdk_pid74566 00:24:25.102 Removing: /var/run/dpdk/spdk_pid76487 00:24:25.102 Removing: /var/run/dpdk/spdk_pid76613 00:24:25.102 Removing: /var/run/dpdk/spdk_pid76617 00:24:25.102 Removing: /var/run/dpdk/spdk_pid76629 00:24:25.102 Removing: /var/run/dpdk/spdk_pid76680 00:24:25.102 Removing: /var/run/dpdk/spdk_pid76684 00:24:25.102 Removing: /var/run/dpdk/spdk_pid76700 00:24:25.102 Removing: /var/run/dpdk/spdk_pid76746 00:24:25.102 Removing: /var/run/dpdk/spdk_pid76750 00:24:25.102 Removing: /var/run/dpdk/spdk_pid76773 00:24:25.102 Removing: /var/run/dpdk/spdk_pid76820 00:24:25.102 Removing: /var/run/dpdk/spdk_pid76827 00:24:25.102 Removing: /var/run/dpdk/spdk_pid76839 00:24:25.102 Removing: /var/run/dpdk/spdk_pid78289 00:24:25.103 Removing: /var/run/dpdk/spdk_pid78377 00:24:25.103 Removing: /var/run/dpdk/spdk_pid78499 00:24:25.103 Removing: /var/run/dpdk/spdk_pid78567 00:24:25.103 Removing: /var/run/dpdk/spdk_pid78622 00:24:25.103 Removing: /var/run/dpdk/spdk_pid78690 00:24:25.103 Removing: /var/run/dpdk/spdk_pid78772 00:24:25.103 Removing: /var/run/dpdk/spdk_pid78846 00:24:25.103 Removing: /var/run/dpdk/spdk_pid78982 00:24:25.103 Removing: /var/run/dpdk/spdk_pid79352 00:24:25.103 Removing: /var/run/dpdk/spdk_pid79383 00:24:25.103 Removing: /var/run/dpdk/spdk_pid79813 00:24:25.103 Removing: /var/run/dpdk/spdk_pid79986 00:24:25.103 Removing: /var/run/dpdk/spdk_pid80080 00:24:25.103 Removing: /var/run/dpdk/spdk_pid80168 00:24:25.103 Removing: /var/run/dpdk/spdk_pid80204 00:24:25.103 Removing: /var/run/dpdk/spdk_pid80234 00:24:25.103 Removing: /var/run/dpdk/spdk_pid80634 00:24:25.103 Removing: /var/run/dpdk/spdk_pid80672 00:24:25.103 Removing: /var/run/dpdk/spdk_pid80728 00:24:25.103 Removing: /var/run/dpdk/spdk_pid81102 00:24:25.103 Removing: /var/run/dpdk/spdk_pid81241 00:24:25.103 Removing: /var/run/dpdk/spdk_pid82044 00:24:25.103 Removing: /var/run/dpdk/spdk_pid82159 00:24:25.103 Removing: /var/run/dpdk/spdk_pid82358 00:24:25.103 Removing: /var/run/dpdk/spdk_pid82444 00:24:25.103 Removing: /var/run/dpdk/spdk_pid82765 00:24:25.103 Removing: /var/run/dpdk/spdk_pid82997 
00:24:25.103 Removing: /var/run/dpdk/spdk_pid83372 00:24:25.103 Removing: /var/run/dpdk/spdk_pid83565 00:24:25.103 Removing: /var/run/dpdk/spdk_pid83684 00:24:25.103 Removing: /var/run/dpdk/spdk_pid83725 00:24:25.103 Removing: /var/run/dpdk/spdk_pid83852 00:24:25.103 Removing: /var/run/dpdk/spdk_pid83866 00:24:25.103 Removing: /var/run/dpdk/spdk_pid83902 00:24:25.361 Removing: /var/run/dpdk/spdk_pid84090 00:24:25.361 Removing: /var/run/dpdk/spdk_pid84310 00:24:25.361 Removing: /var/run/dpdk/spdk_pid84695 00:24:25.361 Removing: /var/run/dpdk/spdk_pid85079 00:24:25.361 Removing: /var/run/dpdk/spdk_pid85486 00:24:25.361 Removing: /var/run/dpdk/spdk_pid85940 00:24:25.361 Removing: /var/run/dpdk/spdk_pid86077 00:24:25.361 Removing: /var/run/dpdk/spdk_pid86158 00:24:25.361 Removing: /var/run/dpdk/spdk_pid86755 00:24:25.361 Removing: /var/run/dpdk/spdk_pid86823 00:24:25.361 Removing: /var/run/dpdk/spdk_pid87270 00:24:25.361 Removing: /var/run/dpdk/spdk_pid87618 00:24:25.361 Removing: /var/run/dpdk/spdk_pid88075 00:24:25.361 Removing: /var/run/dpdk/spdk_pid88196 00:24:25.361 Removing: /var/run/dpdk/spdk_pid88230 00:24:25.361 Removing: /var/run/dpdk/spdk_pid88294 00:24:25.361 Removing: /var/run/dpdk/spdk_pid88340 00:24:25.361 Removing: /var/run/dpdk/spdk_pid88387 00:24:25.361 Removing: /var/run/dpdk/spdk_pid88569 00:24:25.361 Removing: /var/run/dpdk/spdk_pid88595 00:24:25.361 Removing: /var/run/dpdk/spdk_pid88652 00:24:25.361 Removing: /var/run/dpdk/spdk_pid88708 00:24:25.361 Removing: /var/run/dpdk/spdk_pid88741 00:24:25.361 Removing: /var/run/dpdk/spdk_pid88819 00:24:25.361 Removing: /var/run/dpdk/spdk_pid88905 00:24:25.361 Clean 00:24:25.361 killing process with pid 60521 00:24:25.619 killing process with pid 60526 00:24:25.619 18:10:42 -- common/autotest_common.sh@1446 -- # return 0 00:24:25.620 18:10:42 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:24:25.620 18:10:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:25.620 18:10:42 -- common/autotest_common.sh@10 -- # set +x 00:24:25.620 18:10:42 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:24:25.620 18:10:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:25.620 18:10:42 -- common/autotest_common.sh@10 -- # set +x 00:24:25.620 18:10:42 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:25.620 18:10:42 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:24:25.620 18:10:42 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:24:25.620 18:10:42 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:24:25.620 18:10:42 -- spdk/autotest.sh@383 -- # hostname 00:24:25.620 18:10:42 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:24:25.879 geninfo: WARNING: invalid characters removed from testname! 
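The lcov invocations above and immediately below follow the tool's usual aggregate-then-prune flow: capture a post-test tracefile (cov_test.info), merge it with the pre-test baseline (cov_base.info) into cov_total.info, then strip vendored and system paths so only SPDK sources remain in the report. A minimal sketch of that flow, assuming the repository and output paths shown in the trace; the --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 options that the log passes on every call are omitted here for brevity:

    #!/usr/bin/env bash
    # Sketch of the coverage aggregation traced in this log; SRC and OUT
    # mirror the repository and output paths that appear above.
    set -euo pipefail
    SRC=/home/vagrant/spdk_repo/spdk
    OUT=$SRC/../output

    # Merge the pre-test baseline with the post-test capture.
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

    # Prune vendored DPDK, system headers, and example/tool sources.
    lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"

Each -r (remove) pass reads the tracefile fully before writing, which is why the same cov_total.info can safely appear as both input and output in the traced commands.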
00:24:52.423 18:11:06 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:53.356 18:11:10 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:55.885 18:11:12 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:58.445 18:11:14 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:00.346 18:11:16 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:02.293 18:11:19 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:04.820 18:11:21 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:04.820 18:11:21 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:25:04.820 18:11:21 -- common/autotest_common.sh@1690 -- $ lcov --version 00:25:04.820 18:11:21 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:25:04.820 18:11:21 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:25:04.820 18:11:21 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:25:04.820 18:11:21 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:25:04.820 18:11:21 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:25:04.820 18:11:21 -- scripts/common.sh@335 -- $ IFS=.-: 00:25:04.820 18:11:21 -- scripts/common.sh@335 -- $ read -ra ver1 00:25:04.820 18:11:21 -- scripts/common.sh@336 -- $ IFS=.-: 00:25:04.820 18:11:21 -- scripts/common.sh@336 -- $ read -ra ver2 00:25:04.820 18:11:21 -- scripts/common.sh@337 -- $ local 'op=<' 00:25:04.820 18:11:21 -- scripts/common.sh@339 -- $ ver1_l=2 00:25:04.820 18:11:21 -- scripts/common.sh@340 -- $ ver2_l=1 00:25:04.820 18:11:21 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 
v 00:25:04.820 18:11:21 -- scripts/common.sh@343 -- $ case "$op" in 00:25:04.820 18:11:21 -- scripts/common.sh@344 -- $ : 1 00:25:04.820 18:11:21 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:25:04.820 18:11:21 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:04.820 18:11:21 -- scripts/common.sh@364 -- $ decimal 1 00:25:04.820 18:11:21 -- scripts/common.sh@352 -- $ local d=1 00:25:04.820 18:11:21 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:25:04.820 18:11:21 -- scripts/common.sh@354 -- $ echo 1 00:25:04.820 18:11:21 -- scripts/common.sh@364 -- $ ver1[v]=1 00:25:04.820 18:11:21 -- scripts/common.sh@365 -- $ decimal 2 00:25:04.820 18:11:21 -- scripts/common.sh@352 -- $ local d=2 00:25:04.820 18:11:21 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:25:04.820 18:11:21 -- scripts/common.sh@354 -- $ echo 2 00:25:04.820 18:11:21 -- scripts/common.sh@365 -- $ ver2[v]=2 00:25:04.820 18:11:21 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:25:04.820 18:11:21 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:25:04.820 18:11:21 -- scripts/common.sh@367 -- $ return 0 00:25:04.820 18:11:21 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:04.820 18:11:21 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:25:04.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.820 --rc genhtml_branch_coverage=1 00:25:04.820 --rc genhtml_function_coverage=1 00:25:04.820 --rc genhtml_legend=1 00:25:04.820 --rc geninfo_all_blocks=1 00:25:04.820 --rc geninfo_unexecuted_blocks=1 00:25:04.820 00:25:04.820 ' 00:25:04.820 18:11:21 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:25:04.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.820 --rc genhtml_branch_coverage=1 00:25:04.820 --rc genhtml_function_coverage=1 00:25:04.820 --rc genhtml_legend=1 00:25:04.820 --rc geninfo_all_blocks=1 00:25:04.820 --rc geninfo_unexecuted_blocks=1 00:25:04.820 00:25:04.820 ' 00:25:04.820 18:11:21 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:25:04.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.821 --rc genhtml_branch_coverage=1 00:25:04.821 --rc genhtml_function_coverage=1 00:25:04.821 --rc genhtml_legend=1 00:25:04.821 --rc geninfo_all_blocks=1 00:25:04.821 --rc geninfo_unexecuted_blocks=1 00:25:04.821 00:25:04.821 ' 00:25:04.821 18:11:21 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:25:04.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.821 --rc genhtml_branch_coverage=1 00:25:04.821 --rc genhtml_function_coverage=1 00:25:04.821 --rc genhtml_legend=1 00:25:04.821 --rc geninfo_all_blocks=1 00:25:04.821 --rc geninfo_unexecuted_blocks=1 00:25:04.821 00:25:04.821 ' 00:25:04.821 18:11:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:04.821 18:11:21 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:04.821 18:11:21 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:04.821 18:11:21 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:04.821 18:11:21 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.821 18:11:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.821 18:11:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.821 18:11:21 -- paths/export.sh@5 -- $ export PATH 00:25:04.821 18:11:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.821 18:11:21 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:25:04.821 18:11:21 -- common/autobuild_common.sh@440 -- $ date +%s 00:25:04.821 18:11:21 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732644681.XXXXXX 00:25:04.821 18:11:21 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732644681.FbXXHi 00:25:04.821 18:11:21 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:25:04.821 18:11:21 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:25:04.821 18:11:21 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:25:04.821 18:11:21 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:25:04.821 18:11:21 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:25:04.821 18:11:21 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:25:04.821 18:11:21 -- common/autobuild_common.sh@456 -- $ get_config_params 00:25:04.821 18:11:21 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:25:04.821 18:11:21 -- common/autotest_common.sh@10 -- $ set +x 00:25:04.821 18:11:21 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:25:04.821 18:11:21 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:25:04.821 18:11:21 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:25:04.821 18:11:21 
-- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:25:04.821 18:11:21 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:25:04.821 18:11:21 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:25:04.821 18:11:21 -- spdk/autopackage.sh@19 -- $ timing_finish 00:25:04.821 18:11:21 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:04.821 18:11:21 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:25:04.821 18:11:21 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:04.821 18:11:21 -- spdk/autopackage.sh@20 -- $ exit 0 00:25:04.821 + [[ -n 5941 ]] 00:25:04.821 + sudo kill 5941 00:25:04.829 [Pipeline] } 00:25:04.846 [Pipeline] // timeout 00:25:04.853 [Pipeline] } 00:25:04.868 [Pipeline] // stage 00:25:04.874 [Pipeline] } 00:25:04.889 [Pipeline] // catchError 00:25:04.899 [Pipeline] stage 00:25:04.901 [Pipeline] { (Stop VM) 00:25:04.914 [Pipeline] sh 00:25:05.197 + vagrant halt 00:25:08.481 ==> default: Halting domain... 00:25:15.058 [Pipeline] sh 00:25:15.339 + vagrant destroy -f 00:25:18.666 ==> default: Removing domain... 00:25:19.245 [Pipeline] sh 00:25:19.527 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:25:19.536 [Pipeline] } 00:25:19.553 [Pipeline] // stage 00:25:19.558 [Pipeline] } 00:25:19.582 [Pipeline] // dir 00:25:19.587 [Pipeline] } 00:25:19.600 [Pipeline] // wrap 00:25:19.606 [Pipeline] } 00:25:19.617 [Pipeline] // catchError 00:25:19.626 [Pipeline] stage 00:25:19.628 [Pipeline] { (Epilogue) 00:25:19.638 [Pipeline] sh 00:25:19.922 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:25.260 [Pipeline] catchError 00:25:25.262 [Pipeline] { 00:25:25.275 [Pipeline] sh 00:25:25.557 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:25.814 Artifacts sizes are good 00:25:25.821 [Pipeline] } 00:25:25.835 [Pipeline] // catchError 00:25:25.847 [Pipeline] archiveArtifacts 00:25:25.854 Archiving artifacts 00:25:25.958 [Pipeline] cleanWs 00:25:25.969 [WS-CLEANUP] Deleting project workspace... 00:25:25.969 [WS-CLEANUP] Deferred wipeout is used... 00:25:25.974 [WS-CLEANUP] done 00:25:25.976 [Pipeline] } 00:25:25.993 [Pipeline] // stage 00:25:25.999 [Pipeline] } 00:25:26.014 [Pipeline] // node 00:25:26.020 [Pipeline] End of Pipeline 00:25:26.063 Finished: SUCCESS
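One detail worth pulling out of this tail section: the scripts/common.sh xtrace above (the lt 1.15 2 call feeding cmp_versions) shows how the build decides between lcov 1.x and 2.x option sets by comparing version strings field by field. Below is a reconstruction of that comparison as a standalone function, assembled from the traced statements; the names follow the trace, but treat it as an illustrative sketch rather than the canonical scripts/common.sh source:

    #!/usr/bin/env bash
    # Reconstructed from the scripts/common.sh xtrace: split both version
    # strings on '.', '-' and ':' and compare them field by field.
    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]}
        ver2_l=${#ver2[@]}

        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            # A missing field compares as 0, so "2" vs "1.15" is 2.0 vs 1.15.
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            case "$op" in
                '<')
                    ((d1 > d2)) && return 1  # ver1 is newer: '<' is false
                    ((d1 < d2)) && return 0  # ver1 is older: '<' is true
                    ;;
            esac
        done
        return 1  # all fields equal: strictly-less-than is false
    }

    # Usage mirroring the traced call: lcov 1.15 is older than 2, so the
    # pre-2.x LCOV_OPTS shown in the log are exported.
    cmp_versions 1.15 '<' 2 && echo "using pre-2.x lcov options"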